Differential functional regulation of protein kinase C (PKC) orthologs in fission yeast

The two PKC orthologs Pck1 and Pck2 in the fission yeast Schizosaccharomyces pombe operate in a redundant fashion to control essential functions, including morphogenesis and cell wall biosynthesis, as well as the activity of the cell integrity pathway and its core element, the MAPK Pmk1. We show here that, despite the strong structural similarity and functional redundancy of these two enzymes, the mechanisms regulating their maturation, activation, and stabilization have a remarkably distinct biological impact on both kinases. We found that, in contrast to Pck2, putative in vivo phosphorylation of Pck1 within the conserved activation loop, turn, and hydrophobic motifs is essential for Pck1 stability and biological functions. Constitutive activation promoted dephosphorylation and destabilization of Pck2, whereas it enhanced Pck1 levels, interfering with proper downstream signaling to the cell integrity pathway via Pck2. Importantly, although catalytic activity was essential for Pck1 function, Pck2 remained partially functional independent of its catalytic activity. Our findings suggest that early divergence from a common ancestor in fission yeast involved important changes in the mechanisms regulating catalytic activation and stability of PKC family members to allow for flexible and dynamic control of downstream functions, including MAPK signaling.

The protein kinase C family of isozymes plays essential roles in signaling pathways controlling cell growth, proliferation, differentiation, and cell death (1,2). Classical, novel, and atypical mammalian PKC isoforms share a basic structure with a variable N-terminal regulatory domain followed by a highly conserved C-terminal kinase domain, which contains three conserved phosphorylation sites critical for catalytic activity: the activation loop (AL), turn motif (TM), and hydrophobic motif (HM) (1,2). AL phosphorylation is essential for the catalytic activation of most PKC isoforms and is carried out by 3-phosphoinositide-dependent kinase 1 (PDK-1) (1,2). The mammalian target of rapamycin (mTOR) or an autophosphorylation mechanism mediates TM and HM phosphorylation of most PKCs, although their requirement for the regulation of kinase activity varies among family members (2). Given their essential biological roles, PKC isozymes are present throughout the eukaryotic lineage, from humans to simple organisms like yeasts (3). Indeed, the budding yeast Saccharomyces cerevisiae harbors a single and essential PKC ortholog named Pkc1, whose phosphorylation at Thr983 within the AL by the redundant PDKs Pkh1 and Pkh2 is indispensable for its catalytic and biological functions (4). The fission yeast Schizosaccharomyces pombe has two non-essential PKC orthologs known as Pck1 and Pck2 (5,6). Both kinases share extensive homology at the amino acid sequence level, particularly within the catalytic domain (~70% identity over 180 amino acids; Fig. 1). Both Pck1 and Pck2, like S. cerevisiae Pkc1, retain the regulatory C1 and C2 domains found in mammalian PKCs but present an extended regulatory domain, including two polybasic coiled-coil HR1 domains that mediate binding and regulation by the GTP-bound Rho GTPase family members Rho1 and Rho2 (Fig. 1A) (7,8).
The HR1 domains in Pkc1 and Pck1/Pck2 are closely related to those present in the mammalian Rho family-responsive protein kinase N kinases (PKNs) PKN1-3, a subfamily within the PKC family whose members bind and become regulated by Rho family members (9). Pck1 and Pck2 are short half-life proteins that contain N-terminal proline-, glutamic acid-, serine-, and threonine-rich (PEST) sequences, and their interaction with either GTP-Rho1 or GTP-Rho2 increases their stability (7,8). Rho1 and Rho2 synergistically regulate, through Pck1 and Pck2, the biosynthesis of (1,3)-β-D-glucan and α-glucan, the two main cell wall polymers in fission yeast (7,10). Pck1 and Pck2 share overlapping roles in cell viability and partially complement each other, and their simultaneous deletion elicits a synthetic lethal phenotype (7,10). Besides its role in controlling glucan synthesis, Pck2 is a major upstream activator of the cell integrity MAP kinase pathway (CIP). Its core component, the MAP kinase Pmk1, becomes activated during growth and in response to adverse environmental conditions and regulates several processes, including cell separation, cell wall construction, and ionic homeostasis (5,6,11). Although the Rho2-Pck2 branch is essential for Pmk1 activation in response to hyper- and hypo-osmotic stress, both Rho1 and Rho2 target Pck2 to activate the CIP during vegetative growth and cell wall damage (12) (Fig. 1B). The PDK ortholog Ksg1 and an autophosphorylation mechanism are responsible for the in vivo phosphorylation of Pck2 at the conserved Thr842 within the AL during vegetative growth and under stress (13). These events, together with turn motif autophosphorylation at Thr984 and binding to Rho1 and/or Rho2, stabilize Pck2 and render it competent to exert its biological functions, including activation of the CIP (13). In addition, we have discovered a novel mechanism involving the Akt ortholog Gad8 (a TORC2 target) and Psk1 (a TORC1 target) that promotes an increase in Pck2 protein levels to allow activation of the CIP in response to cell wall damage or glucose exhaustion (14,15). Initial observations suggested that Pck1 was a negative regulator of the CIP (16). However, later studies demonstrated that, instead, it plays a less prominent role than Pck2 as a positive regulator of Pmk1 activity during vegetative growth and cell wall stress (12,16). Nevertheless, considering their shared functions and strong structural similarity, it might be foreseen that the mechanisms regulating Pck1 function should be identical to those described for Pck2. Contrary to this prediction, in this work we show that, in fission yeast, the expansion of the PKC family from a single ancestor was accompanied by striking differences in the mechanisms regulating maturation, activation, and protein levels of both kinases. The early acquisition of differential regulatory activation and stabilization by Pck1 and Pck2 allows for fine-tuning of downstream MAPK signaling and regulation of cellular homeostasis in this simple organism.

Pck1 is phosphorylated in vivo by Ksg1 within the AL at Thr823 and is more stable than Pck2

The C-terminal catalytic domains of Pck1 and Pck2 are strongly conserved (Fig. 1A). We showed previously that the conserved Thr842 located at the Pck2 AL is phosphorylated in vivo by the PDK1 ortholog Ksg1 (14). The equivalent threonine residue within the Pck1 AL lies at position 823 (Fig. 1).
Using a specific anti-Thr(P)823 antibody (see "Experimental Procedures"), we detected specific in vivo phosphorylation of Pck1 at this residue (supplemental Fig. S1). Incubation with the anti-Thr(P)823 antibody revealed a strong phosphorylation signal in in vitro kinase assays performed with wild-type versions of Ksg1 and Pck1 (Fig. 2A, lane 4). This signal was totally absent when employing the kinase-dead version of Ksg1 (Fig. 2A). To test the in vivo dependence on Ksg1, we employed the thermosensitive ksg1-208 mutant (17). In control cells, the levels of total (anti-HA antibody) and Thr(P)823 Pck1 were approximately 10 times higher than in ksg1-208 cells either growing at the permissive temperature (25°C) or incubated at 36°C, which is a restrictive temperature for Ksg1 function (Fig. 2B). These results indicate that Ksg1 is also responsible for AL phosphorylation of Pck1 at Thr823 in vivo and that this event might regulate Pck1 stability (see below).

Figure 2. Pck1 is phosphorylated in vivo within the AL at Thr823 by Ksg1 and is more stable than the Pck2 isoform. A, GST:pck1 (control, C) and GST:pck1-T823A fusions were incubated at 30°C for 1 h with either GST:ksg1-KD (kinase-dead mutant) or GST:ksg1 with or without ATP. Reaction mixtures were resolved and hybridized with anti-Thr(P)823 or anti-GST antibodies. Results representative of two independent experiments are shown. B, strains MM1578 (pck1:HA, control) and MM1724 (pck1:HA ksg1-208) were grown at 25°C in YES medium and then incubated at 36°C for the indicated times. Cell extracts were resolved by SDS-PAGE and hybridized with anti-Thr(P)823 and anti-HA antibodies. Relative units for total (gray columns) and Thr823-phosphorylated (black columns) Pck1 levels were estimated with respect to the internal control (anti-cdc2 blot) at each time point. Results representative of two independent experiments are shown. C, growing cultures of strains MM1578 (pck1:HA, control) and MM1718 (tor1Δ pck1:HA) were treated with either 0.6 M KCl or 1 µg/ml caspofungin or shifted to the same medium without glucose and supplemented with 3% glycerol. Cell extracts were resolved by SDS-PAGE and hybridized separately with anti-Thr(P)823 and anti-HA antibodies. Relative units as mean ± S.D. (biological triplicates) for total (gray columns) and Thr(P)823 (black columns) Pck1 levels were estimated as above. *, p < 0.05; **, p < 0.005; calculated by unpaired Student's t test. D, total (anti-HA) Pck1 and Pck2 levels were determined in growing cultures of control strains MM913 (pck2:HA) and MM1578 (pck1:HA) and after treatment with 0.6 M KCl for 30 min. Relative units as mean ± S.D. (biological triplicates) for total Pck1 and Pck2 levels were estimated with respect to the internal control (anti-cdc2 blot). **, p < 0.005; calculated by unpaired Student's t test. E, control strains MM913 and MM1578 were grown to early log phase and incubated with or without 100 µg/ml cycloheximide (CHX) for the indicated times. Cell extracts were hybridized with anti-HA antibodies. Relative units for total Pck1 and Pck2 levels were estimated with respect to the internal control (anti-cdc2 blot) at each time point. Results representative of two independent experiments are shown.

Pck2 protein levels increase in response to different stresses through a mechanism partially regulated by the TORC2 complex and its main component, the Tor1 kinase (14,15). We found that both total and Thr(P)823 Pck1 levels were ~2- to 3-fold lower in growing tor1Δ cells compared with control cells (Fig. 2C).
Both Thr(P)823 and total Pck1 levels were augmented progressively in control cells subjected to saline stress (KCl), cell wall stress with caspofungin (a specific β-glucan synthase inhibitor), or glucose starvation (Fig. 2C). Importantly, this rise was strongly abrogated in tor1Δ cells during cell wall stress or glucose starvation but less evident in response to salt stress (Fig. 2C). Taken together, these findings suggest that TORC2 up-regulates Pck1 levels during growth and in response to specific stresses. Total Pck1 protein levels were ~2- to 3-fold higher than those of Pck2 (Fig. 2D, time 0), confirming previous data on absolute proteome quantifications at the single-cell level (18). Remarkably, in contrast to Pck2, which is a relatively unstable protein with a short half-life (14), Pck1 protein levels decreased very slowly in growing cells treated with the protein synthesis inhibitor cycloheximide (Fig. 2E), implying that Pck1 is more stable than the Pck2 isoform.

Putative phosphorylation at conserved AL, TM, and HM residues differentially affects Pck1 and Pck2 levels and biological functions

In fission yeast, total Pck2 levels are very similar in control cells and in those expressing a non-phosphorylatable AL mutant (Pck2-T842A, Fig. 3A) (14). However, the observation that both total and Thr(P)823 Pck1 levels are quite low in ksg1-208 cells suggests that Pck1 stability is dependent upon AL phosphorylation. Indeed, as shown in Fig. 3B, Pck1 levels were strongly reduced (by ~95%) in cells expressing the non-phosphorylatable AL mutant Pck1-T823A. As with Pck2 (14), bacterially purified or immunoprecipitated versions of wild-type and mutant Pck1 were not active either in vitro or in vivo, preventing direct biochemical confirmation of their kinase activity status. Instead, we tested the ability of genomic versions of wild-type or mutated alleles of Pck1 to suppress several known phenotypes of pck1Δ cells, including defective signaling to the CIP and growth sensitivity in the presence of caspofungin, Calcofluor white, and magnesium chloride (12,16), as biological readouts to comparatively assess their function in vivo. The T842A mutation does not affect Pck2 signaling activity to the CIP or growth sensitivity in the presence of caspofungin (Fig. 3C) (14). On the contrary, compared with control cells, pck1-T823A cells displayed a strong or moderate growth-sensitive phenotype in the presence of caspofungin, magnesium chloride, and Calcofluor white (Fig. 3C). Pck1-T823A was also synthetically lethal with the pck2Δ mutation (data not shown). Thus, in vivo, AL phosphorylation is a critical determinant for the stability and biological functions of Pck1, but not for the Pck2 ortholog. We also noticed that the growth sensitivity of pck1-T823A cells to the above stressors was more pronounced than that of pck1Δ cells (Fig. 3C). Although overexpression of wild-type Pck2 alters cell morphology and inhibits fission yeast growth because of hyperactivation of the CIP, overexpression of the pck2-T842A allele is not lethal (Fig. 3D) (19). Conversely, overexpression of wild-type Pck1 is not lethal (12), whereas overexpression of the Pck1-T823A mutant allele induced a strong lytic phenotype and was deleterious for cell growth (Fig. 3D). Moreover, the maximal activation of the CIP core member MAPK Pmk1 in response to salt stress, which is exclusively dependent upon Pck2 function (16), showed a modest but significant decrease in pck1-T823A cells compared with control cells (Fig. 3E).
Collectively, these results suggest that Pck1 must be phosphorylated in vivo within the AL by Ksg1 to attain a stable and functional conformation and that failure to do so may negatively interfere with proper Pck2 signaling. Together with the canonical AL site at Thr842, Thr846 has been proposed to be phosphorylated in vivo and to play a role in Pck2 catalytic activation and biological functions (14). Both total and Thr(P)823 Pck1 levels were unaffected in cells expressing Pck1-T827A, carrying a mutation in the residue equivalent to that altered in Pck2-T846A (Figs. 1A and 3B). However, this Pck1 mutant was as sensitive as pck1Δ cells to Calcofluor white and magnesium chloride, but not to caspofungin (Fig. 3C). Conversely, mutation of Pck1 at the conserved phosphorylatable TM site (Pck1-T965A) decreased both total and Thr(P)823 kinase levels (Fig. 3B) and resulted in growth sensitivity only to caspofungin (Fig. 3F). Hence, in vivo putative phosphorylation of Pck1 at Thr827 (AL) and Thr965 (TM) may positively influence Pck1 biological functions in specific biological contexts. Mutation at the putative conserved phosphorylation site within the Pck1 HM (Pck1-S983A) did not affect Pck1 stability, AL phosphorylation, or biological functions (Fig. 3, B and F). However, in sharp contrast with Pck2, the Pck1 mutant at both the TM and HM (Pck1-T965A S983A), which is equivalent to the Pck2-T984A S1002A mutant, showed very low protein levels (~80% decrease compared with control cells, Fig. 3B), and its sensitivity to different stresses was similar to that of pck1Δ cells (Fig. 3F). Taken as a whole, these observations reinforce the idea that, contrary to Pck2, in vivo phosphorylation of Pck1 within the AL, TM, and HM sites is essential for protein stability and biological functions.

Mutation at the conserved pseudosubstrate motif has distinct effects on Pck1 and Pck2 stability and downstream signaling

PKCs, including PKCα, possess a pseudosubstrate segment (PS) that keeps the enzyme in a closed, autoinhibited conformation. This domain blocks the substrate-binding cavity and protects the priming AL, TM, and HM phosphosites within the catalytic domain from dephosphorylation (20). Deletion of the PS domain or a point mutation of the conserved alanine residue to a negatively charged glutamic acid renders the kinase constitutively active both in vivo and in vitro (21). Both Pck1 and Pck2 also harbor a PS domain with conserved alanine residues at positions 399 and 392, respectively (Fig. 4A). We found that total and Thr(P)842 Pck2 levels were ~40% lower in the PS mutant (pck2-A392E) than in control cells (Fig. 4B). This finding was somewhat expected, because constitutive catalytic activation of PKCs is assumed to elicit their subsequent dephosphorylation and degradation (22). Remarkably, Pmk1 phosphorylation was enhanced in growing pck2-A392E cells compared with control cells (Fig. 4C), and this feature was accompanied by increased growth sensitivity to magnesium chloride (Fig. 4D), a known phenotype associated with increased basal Pmk1 activity (23). The increase in total Pck2 abundance displayed by control cells when treated with a cell wall synthesis inhibitor (caspofungin) or KCl (salt stress) was reduced by ~50-55% in pck2-A392E cells (Fig. 4E). Pmk1 activation in the presence of caspofungin was partially defective in pck2-A392E cells (Fig. 4F), although these cells were not hypersensitive to this compound (supplemental Fig. S2).
In contrast, the response to salt stress was similar to that of control cells expressing pck2+ (Fig. 4F). Taken together, these observations support the hypothesis that constitutive activation promotes dephosphorylation and destabilization of Pck2, resulting in increased basal Pmk1 activity and limited downstream signaling to the CIP under conditions that require de novo protein synthesis, such as cell wall stress (13). Contrary to Pck2, the Pck1 PS mutant (pck1-A399E) showed total and Thr(P)823 levels that were approximately twice those of control cells expressing pck1+ (Fig. 5A). Interestingly, both total and Thr(P)823 Pck1 levels, which increase progressively in control cells in response to cell wall or salt stress, did not increase further in pck1-A399E cells under the same treatments but instead dropped slowly with longer incubation times (Fig. 5B). Pck2 plays a prominent role in the activation of the CIP during vegetative growth, whereas the Pck1 contribution to this response is minimal, as seen by the strong drop in basal Pmk1 phosphorylation detected in pck2Δ cells compared with control cells (Fig. 5C) (16). However, pck1-A399E cells showed a marked increase in basal Pmk1 phosphorylation that was only partially attenuated in the absence of Pck2 (~50% reduction in pck1-A399E pck2Δ cells versus ~80% in pck2Δ cells expressing wild-type Pck1) (Fig. 5C). Growth sensitivity to magnesium chloride mirrored the basal Pmk1 phosphorylation levels displayed by these mutant strains (Fig. 5D). Interestingly, Pck2-mediated activation of Pmk1 in response to salt stress, which is independent of Pck1 function (16), was attenuated in pck1-A399E cells (Fig. 5E). Therefore, constitutive catalytic activation of Pck1 might interfere with proper downstream signaling of Pck2 to the MAPK Pmk1. Exponentially growing fission yeast cultures show 20-25% septated cells (Fig. 5, F and G, control). Remarkably, Pck1-A399E cells showed strong cytokinesis defects, with a notable increase in both septated and multiseptated cells (~50% and ~5% of the total cell number, respectively; Fig. 5, F and G). Constitutive activation of Pmk1 appears to be responsible for this defect, because it was mostly suppressed by simultaneous deletion of Pmk1 (Fig. 5, F and G). However, despite the functional relationship between constitutive Pck1 activity and Pmk1 function, overexpression of Pck1-A399E was lethal in both control and pmk1Δ cells (Fig. 5H), suggesting that Pck1 can modulate morphogenesis and/or cell growth through both Pmk1-dependent and -independent mechanisms.

(Figure 4 legend, continued) Results representative of three independent experiments are shown. E, the growing strains described in B were treated with either 1 µg/ml caspofungin or 0.6 M KCl. Total Pck2 levels were detected after incubation with anti-HA antibodies. Anti-cdc2 was used as a loading control. Results representative of two independent experiments are shown. F, the strains described in B were treated with either 1 µg/ml caspofungin or 0.6 M KCl, and activated/total Pmk1 was detected as above. Relative units for Pmk1 phosphorylation (anti-phospho-p44/42 blot) were determined with respect to the internal control (anti-HA blot). Results representative of two independent experiments are shown.

Pck1 is a main Rho1 effector during the control of cell growth and cell integrity signaling

The essential Rho GTPase Rho1 is involved, together with Rho2, in the activation of the CIP during vegetative growth and in response to cell wall damage (12).
Cells expressing the hypomorphic rho1-596 allele show a thermosensitive phenotype and are hypersensitive to caspofungin (Fig. 6A) (24). Notably, both phenotypes were partially suppressed by the PS-mutated Pck1-A399E protein (Fig. 6A). The high percentage of septated and multiseptated cells present in the Pck1-A399E mutant (~50% and ~5%, respectively) was alleviated in a rho1-596 pck1-A399E background (~27% and ~0.5% septated and multiseptated cells, respectively; Fig. 6B). On the contrary, rho1-596 thermosensitivity was aggravated by the simultaneous presence of the equivalent Pck2-A392E mutated protein (supplemental Fig. S2). The low total and Thr(P)823 Pck1 levels present in rho1-596 cells during growth and in response to stress (KCl) were partly restored in a rho1-596 pck1-A399E double mutant strain (Fig. 6C). Basal Pmk1 phosphorylation, which is increased in both the rho1-596 hypomorphic mutant (24) and pck1-A399E cells (Fig. 5C), was also analyzed in the corresponding double mutant (Fig. 6D). Moreover, the enhanced Pmk1 activity displayed by pck1-A399E cells was strongly alleviated in a rho1-596 rho2Δ double mutant compared with rho2Δ cells (Fig. 6E). Altogether, these results support the functional relevance of activated Pck1 as a main Rho1 effector during the control of cell growth and cell integrity signaling.

Catalytic activity is essential for Pck1 but not Pck2 function and promotes destabilization of both kinases

To gain more insight into how catalytic activation of Pck1 and Pck2 affects their stability and functions, we generated strains expressing catalytically inactive Pck1-D789N and Pck2-D808N, in which the conserved catalytic aspartate was substituted by asparagine, thus maintaining the integrity of the ATP-binding pocket (Fig. 7A) (25). Mammalian PKCs carrying this mutation are constitutively phosphorylated (primed) within the AL, TM, and HM, as they become protected from dephosphorylation (25). Compared with control cells, total and Thr(P)842 Pck2 levels were detected in cells carrying pck2-D808N as shifted, slower-migrating bands reminiscent of increased phosphorylation (Fig. 7B), and the total amount of either Pck2-D808N or Pck2-A392E D808N was approximately twice that of control cells. Importantly, the difference in Thr842 phosphorylation was negligible in both pck2-D808N cells and the pck2-A392E D808N double mutant (Fig. 7B), suggesting that intrinsic catalytic activity is responsible for the destabilization triggered after activation of Pck2. These mutants also showed very low basal Pmk1 phosphorylation (Fig. 7C). Moreover, Pck2-A392E D808N cells failed to activate Pmk1 in response to salt stress, confirming that catalytic activation of Pck2 is essential for this response (Fig. 7D). Similar to the pck2-D808N mutant, total and Thr(P)823 Pck1 levels were present in pck1-D789N cells as multiple species with reduced electrophoretic mobility, and the total amount of either Pck1-D789N or Pck1-A399E D789N was approximately two times higher than in control cells (Fig. 7E). Notably, introduction of the D789N mutation into pck1-A399E suppressed both the increase in Pmk1 activity and the multiseptated phenotype shown by cells expressing the constitutively active PS mutant (Fig. 7, F and G). The negative effect of the D789N mutation was evident, as the growth sensitivity to caspofungin of both pck1-D789N and pck1-A399E D789N cells was even higher than that of pck1Δ cells (Fig. 7H). However, contrary to pck2Δ cells, pck2-D808N and pck2-A392E D808N cells were moderately growth-resistant in the presence of this compound (Fig. 7I).
Therefore, although Pck1 functions appear to be strictly dependent upon its catalytic activity, inactive Pck2 is biologically functional to a certain extent. Our results also indicate that intrinsic kinase activity promotes destabilization of both Pck1 and Pck2.

Discussion

The PKC orthologs Pck1 and Pck2 share an essential role in modulating cell growth and morphogenesis in fission yeast (6,7). Taking into account their redundant functions and strong structural similarity in the regulatory and catalytic domains, it might be anticipated that the mechanisms responsible for catalytic activation and stabilization of both kinases should be identical. We found that, similar to Pck2 (14) and most PKC isoforms from higher eukaryotes (2,26), the conserved Thr823 within the AL of Pck1 becomes phosphorylated in vivo by Ksg1, the fission yeast PDK ortholog. We also confirmed that, as for Pck2, TORC2 plays a major role in positively regulating Pck1 levels during growth and in response to most stresses. Notably, Pck1 up-regulation was clearly abrogated in tor1Δ cells during cell wall stress or glucose starvation but only slightly limited during salt stress. The nature of these stresses is very different, and it is therefore likely that the salt treatment was simply not strong enough for the effect to manifest clearly. In any case, despite the above similarities, we provide compelling evidence that regulation of catalytic activation and stabilization of Pck1 and Pck2 has a remarkably distinct biological impact on both kinases (Fig. 8). The intracellular levels of most mammalian PKCs are strongly reduced in the absence of AL phosphorylation (2). For example, PKCα AL dephosphorylation decreases its sumoylation, which, in turn, promotes its ubiquitination and ultimately enhances its degradation via the ubiquitin-proteasome pathway (27). We observed that Ksg1-dependent in vivo AL phosphorylation is also a major mechanism controlling Pck1 stability and biological functions. In strong contrast, Pck2 protein levels, downstream signaling to the CIP, and biological functions are not significantly altered in the non-phosphorylatable AL mutant Pck2-T842A with respect to control cells (14). Moreover, although individual and/or combined mutations at the putative in vivo phosphorylation TM and HM sites had an overall negative effect on Pck1 levels and function, this effect was not observed in the respective Pck2 mutant counterparts. Hence, fission yeast Pck1 resembles the majority of mammalian PKC family members, in which maturation depends upon phosphorylation at conserved AL, TM, and HM sites within the catalytic domain, which converts newly synthesized kinases into a stable, degradation-resistant conformation (1). Our results also suggest that newly synthesized Pck1 is constitutively phosphorylated by the PDK (Ksg1) at the AL residue (Thr823) and that failure to do so down-regulates Pck2 signaling to the CIP. Although presently unknown, this putative interference mechanism might be due to PDK trapping by the non-phosphorylatable Pck1-T823A mutant and the ensuing defect in Pck2 activation. Conversely, the fact that the pck2-T842A mutant is stable and functional strongly suggests that, in wild-type cells, Pck2 might exist as both AL-phosphorylated and -unphosphorylated isoforms (Fig. 8).
Mammalian PKCs bear a pseudosubstrate segment that blocks the substrate-binding cavity and protects the priming AL, TM, and HM phosphosites within the catalytic domain from dephosphorylation and destabilization (1,20). This model predicts that deletion or point mutation of the PS domain constitutively activates the kinase and elicits its dephosphorylation and degradation, as shown for several PKC isoforms. Introduction of the PS mutation in Pck2 (pck2-A392E cells) increased its function as an upstream activator of the CIP during vegetative growth but, at the same time, promoted destabilization of the kinase. The mechanism(s) responsible for Pck2 degradation might thus be similar to those operating on mammalian PKCs. Remarkably, constitutive activation in the pck1-A399E PS mutant also enhanced its activity but, in contrast to Pck2, increased Pck1 phosphorylation and stability. Considering that the half-life of Pck1 is longer than that of Pck2, our observations depict a model where activated Pck2 might be prone to rapid dephosphorylation and degradation whereas Pck1 is not (Fig. 8). Nucleotide pocket occupation, but not intrinsic kinase activity, is necessary for PKC priming and maturation, because kinase-inactive mutants that maintain the integrity of the ATP-binding pocket are constitutively primed (25). Similarly, autophosphorylation is dispensable for the phosphorylation and maturation of Pck1 and Pck2, as confirmed after introducing the catalytic aspartate mutation into both kinases (D789N and D808N mutants, respectively). Importantly, intrinsic catalytic activity is a determinant for the destabilization triggered after constitutive activation of Pck2, as evidenced by the recovery in phosphorylated and total Pck2 levels in pck2-A392E D808N cells in comparison with single pseudosubstrate pck2-A392E mutant cells. In addition, the use of the fully primed catalytic aspartate mutants described above allows us to formally demonstrate that, although catalytic activity is essential for Pck1 functions, Pck2 remains partially functional in the absence of intrinsic kinase activity. Our findings suggest that the existence of biological functions without kinase activity represents a common theme within the extended PKC superfamily and might have been attained early during evolution. All the structural and regulatory elements that appear to be distributed among the members of the large mammalian PKC family are present in Pkc1, the single and archetypal PKC enzyme of S. cerevisiae (3). For the most part, this domain structure is similarly conserved in fission yeast Pck1 and Pck2 (Fig. 1A). Therefore, from an evolutionary perspective, these two functionally redundant kinases may represent a prime example of the expansion of the PKC superfamily from a single common ancestor. Most importantly, our results strongly suggest that, in fission yeast, PKC duplication was accompanied by changes in the mechanisms that regulate catalytic activation and stability of the two kinase isoforms. The biological relevance of these distinct regulatory mechanisms is exemplified when MAPK Pmk1 activity is used as a readout for Pck1- and Pck2-dependent downstream signaling. Strong Pmk1 activation in response to osmotic saline stress is a quick and transient event, does not require new protein synthesis, and is totally dependent on Pck2 catalytic activity (13,14).
Indeed, Pck2 appears perfectly suited to perform this role because, contrary to Pck1, its constitutive activation elicits its dephosphorylation and degradation, thus decreasing the magnitude of the signal at long incubation times. However, Pmk1 activation under cell wall stress takes place in a progressive manner until reaching its maximum at long incubation times and depends on the enhanced synthesis and activity of both Pck1 and Pck2 (12,14). In this situation, catalytic activation and induced stabilization of Pck1 might lead to graded and robust downstream signaling to the MAPK module that increases with time. Differential regulation of Pck1 and Pck2 activation and/or stability may be important for the distinct roles of both kinases during the cellular response to short- versus longer-term stress. We found that, compared with control cells, pck2Δ cells exhibited a defective growth recovery phenotype after being subjected to severe thermal stress (55°C) for relatively short periods of time, whereas cells expressing AL (T842A) and catalytically inactive (D808N) Pck2 alleles showed fairly good growth recovery under the same conditions (supplemental Fig. S3). On the other hand, the equivalent Pck1 mutants showed a similar and very modest growth defect compared with control cells (supplemental Fig. S3). These results reinforce the suggestion that both catalytically active and inactive Pck2 isoforms might have a more prominent role than Pck1 in promoting cell survival in response to short-term stresses. At the same time, Pck1 and Pck2 activation status must be tightly coordinated in vivo, as indicated by the observation that cells expressing the up-regulated pck1-A399E allele hyperactivate the CIP constitutively in a Pck2-independent fashion and interfere with proper downstream signaling to the MAPK cascade by Pck2. In higher eukaryotes, precise control of the amplitude of PKC signaling is essential for cellular homeostasis, and disruption of this control may lead to different pathophysiological states (28). Our results suggest that alternative regulation of the stability and PDK-mediated phosphorylation of both kinases emerged as a major factor allowing precise control of PKC signaling during the early diversification of this large and functionally relevant class of enzymes.

Strains, media, growth conditions, and gene disruption

The S. pombe strains used in this work (supplemental Table S1) were derived from control strain MI200 and express a genomic Pmk1-HA fusion (11). They were grown in rich (YES; 0.5% yeast extract, 2% glucose) or minimal (EMM2) medium with 2% glucose plus supplements (29). Transformants expressing pREP3X-based plasmids were grown in liquid EMM2 medium with thiamine (5 mg/liter) and either plated on solid medium with or without the vitamin or transferred to EMM2 lacking thiamine.
Gene fusion, site-directed mutagenesis, and expression plasmids

To construct the integrative plasmid pJK148-Pck1:HA, the pck1+ ORF plus regulatory sequences were amplified by PCR using fission yeast genomic DNA as a template and the 5′-oligonucleotide Pck1.XbaI-F (supplemental Table S2), which hybridizes 882-862 bp upstream of the pck1+ ATG start codon and contains an XbaI site, and the 3′-oligonucleotide Pck1HASmaI-R, which hybridizes at the 3′ end of the pck1+ ORF and incorporates a 64-nt sequence encoding one HA epitope (sequence GYPYDVPDYA) and a SmaI site. PCR fragments were digested with XbaI and SmaI and cloned into the integrative plasmid pJK148. The Pck1 ORF contains a NruI site that was deleted using the mutagenic 5′-oligonucleotide Pck1NruIX-F and the 3′-oligonucleotide Pck1NruIX-R with plasmid pJK148-Pck1:HA as a template. The mutagenized Pck1 sequence was digested with XbaI and SmaI and subcloned to generate pJK148-Pck1NruIΔ:HA. Integrative plasmids expressing HA-fused Pck1 mutants were obtained by site-directed mutagenesis PCR using plasmid pJK148-Pck1NruIΔ:HA as a template and the mutagenic oligonucleotide pairs described in supplemental Table S2. Once confirmed, the mutagenized Pck1 sequences were subcloned into pJK148. The resulting integrative plasmids were digested at the unique NruI site within leu1+ and transformed into pck1Δ strain GB35 (supplemental Table S1). Transformants prototrophic for leucine were obtained, and the fusions were verified by both PCR and Western blot analysis. The integrative plasmids pJK148-Pck2.A392E:HA and pJK148-Pck2.D808N:HA were obtained by site-directed mutagenesis PCR using plasmid pJK148-Pck2NruIΔ:HA (14) as a template and the mutagenic oligonucleotide pairs described in supplemental Table S2. Mutant Pck1 overexpression constructs were obtained by site-directed mutagenesis PCR using plasmid pREP3X-pck1+ (7) as a template and the corresponding mutagenic oligonucleotide pairs (supplemental Table S2). A GST-fused wild-type Pck1 construct was obtained by PCR employing a fission yeast cDNA library as a template and the oligonucleotides Pck1pKG-F (SmaI site) and Pck1pKG-R (XbaI site). The PCR product was then digested with SmaI and XbaI and cloned into plasmid pGEX-KG (30) to generate pGEX-KG-Pck1. Mutant Pck1 constructs were obtained by site-directed mutagenesis using pGEX-KG-Pck1 as a template and the corresponding mutagenic oligonucleotide pairs (supplemental Table S2), digested with SmaI and XbaI, and cloned into pGEX-KG. The Ksg1-GST fusion was obtained by PCR employing the oligonucleotide pair Ksg1pKG-F (SmaI site) and Ksg1pKG-R (NcoI site). A kinase-dead version of Ksg1 (K128R mutant) was obtained by site-directed mutagenesis with plasmid pGEX-KG-Ksg1 as a template (14) and the mutagenic oligonucleotides Ksg1K128R-F and Ksg1K128R-R (supplemental Table S2). Constructs were digested with SmaI and NcoI and cloned into pGEX-KG.

Kinase assays

In vitro kinase reactions were performed as described previously (31) with purified bacterially expressed GST-Ksg1 or GST-Ksg1-K128R (kinase-dead) as activating kinases and either wild-type or mutant GST-fused Pck1 constructs as substrates. GST-tagged fusions were detected with polyclonal goat anti-GST antibody conjugated to horseradish peroxidase (GE Healthcare) and the ECL system.

Stress treatments and detection of activated Pmk1

Log-phase cell cultures (A600 = 0.5) were supplemented with either KCl (Sigma), caspofungin (Sigma), or Calcofluor white (Sigma).
In glucose starvation experiments, cells grown in YES medium with 7% glucose were resuspended in the same medium lacking glucose and osmotically equilibrated with 3% glycerol. Preparation of cell extracts, purification of HA-tagged Pmk1 with nickel-nitrilotriacetic acid-agarose beads (Qiagen), and SDS-PAGE were performed as described previously (11). Dual phosphorylation of Pmk1 was detected with rabbit polyclonal anti-phospho-p44/42 antibody (Cell Signaling Technology), whereas total Pmk1 was detected with mouse monoclonal anti-HA antibody (12CA5, Roche Molecular Biochemicals). Immunoreactive bands were revealed with anti-rabbit or anti-mouse HRP-conjugated secondary antibodies (Sigma) and the ECL system (Amersham Biosciences-Pharmacia).

Detection of total and phosphorylated Pck1 and Pck2

Cell extracts were prepared using buffer IP (50 mM Tris-HCl (pH 7.5), 5 mM EDTA, 150 mM NaCl, 1 mM β-mercaptoethanol, 10% glycerol, 0.1 mM sodium orthovanadate, 1% Triton X-100, and protease inhibitors). Equal amounts of total protein were resolved in 6% SDS-PAGE gels and transferred to Hybond-ECL membranes. AL phosphorylation of Pck2 at Thr842 was detected using a specific anti-Thr(P)842 antibody as described previously (14). To detect AL phosphorylation of Pck1 at Thr823, an anti-phospho polyclonal antibody was produced by immunization of rabbits with a synthetic phosphopeptide corresponding to the residues surrounding Thr823 of Pck1 (GenScript). Total Pck2 and Pck1 were detected with mouse monoclonal anti-HA antibody. Anti-PSTAIR (anti-Cdc2, Sigma) was used as a loading control.

Quantification of Western blot experiments and reproducibility of results

Quantification of Western blot signals was performed using ImageJ (32). Briefly, bands plus background were selected or drawn as rectangles, and a profile plot was obtained for each band (peaks). To minimize the background noise in the bands, each peak floating above the baseline of the corresponding profile plot was manually closed off using the straight-line tool. Finally, measurement of the closed peaks was performed with the wand tool. Relative units for Pmk1 activation were estimated by determining the signal ratio of the anti-phospho-p44/42 blot (activated Pmk1) with respect to the anti-HA blot (total Pmk1) at each time point. Relative units for phosphorylated and total Pck1/Pck2 levels were estimated by determining the signal ratio of either the anti-Thr(P)842 blot (AL-phosphorylated Pck2), the anti-Thr(P)823 blot (AL-phosphorylated Pck1), or the anti-HA blot (total Pck2 or Pck1) with respect to the anti-cdc2 blot (internal control) at each time point. Unless otherwise stated, the results shown correspond to experiments performed as biological triplicates. Mean relative units ± S.D. and/or representative results are shown. The p values were analyzed by unpaired Student's t test.

Plate assay of stress sensitivity for growth

Decimal dilutions of S. pombe control and mutant strains were spotted in duplicate on YES solid medium or on the same medium supplemented with different concentrations of MgCl2 (Sigma), FK506 (Alexis Biochemicals), Calcofluor white, or caspofungin. Plates were incubated at 28°C for 3 days and then photographed. Results representative of three independent experiments are shown in the corresponding figures.

Fluorescence microscopy

Calcofluor white was employed for cell wall/septum staining as described previously (18).
Images were taken on a Leica DM 4000B fluorescence microscope with a ×100 objective and captured with a cooled Leica DC 300F camera and IM50 software. To determine the percentage of multiseptated cells, the number of septated cells was scored in each case (n ≥ 400).
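As an illustration of the Western blot quantification scheme described above (signal of interest normalized to the anti-cdc2 loading control, and phospho-Pmk1 normalized to total Pmk1), the following minimal Python sketch computes relative units from band intensities that would already have been measured, for example with ImageJ. The intensity values, time points, and variable names are hypothetical placeholders, not data from the paper; the actual measurements were made with ImageJ's profile-plot and wand tools as described.

```python
# Minimal sketch (assumed workflow): relative units from background-corrected
# band intensities exported from ImageJ. All numbers below are hypothetical.

def relative_units(signal, loading_control):
    """Signal of interest normalized to the anti-cdc2 loading control."""
    return signal / loading_control

timepoints = [0, 30, 60]                 # minutes after treatment (example)
anti_HA    = [1250.0, 1830.0, 2410.0]    # total Pck1 (anti-HA blot)
anti_pT823 = [640.0, 1120.0, 1580.0]     # Thr(P)823 Pck1 blot
anti_cdc2  = [980.0, 1010.0, 995.0]      # internal loading control

for t, ha, pt, cdc2 in zip(timepoints, anti_HA, anti_pT823, anti_cdc2):
    total = relative_units(ha, cdc2)
    phospho = relative_units(pt, cdc2)
    print(f"t={t:>3} min  total Pck1={total:.2f}  Thr(P)823 Pck1={phospho:.2f}")

# Pmk1 activation is computed analogously, as the ratio of the
# anti-phospho-p44/42 signal to the anti-HA (total Pmk1) signal.
```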
Wireless light energy harvesting and communication in a waterproof GaN optoelectronic system

Wireless technologies can be used to track and observe freely moving animals. InGaN/GaN light-emitting diodes (LEDs) allow for underwater optical wireless communication owing to the small attenuation of water in the blue-green spectral region. GaN-based quantum well diodes can also harvest and detect light. Here, we report a monolithic GaN optoelectronic system (MGOS) that integrates an energy harvester, an LED and a SiO2/TiO2 distributed Bragg reflector (DBR) into a single chip. The DBR serves as a waterproof layer as well as an optical filter. The waterproof MGOS can operate in boiling water and in ice without external interconnect circuits. The units transform coded information from an external light source into electrical energy and directly activate the LEDs for illumination and for relaying light information. We demonstrate that our MGOS chips, when attached to Carassius auratus fish freely swimming in a water tank, simultaneously conduct wireless energy harvesting and light communication. Our devices could be useful for tracking, observing and interacting with aquatic animals.

Results

Profiles of the waterproof MGOS. Combining the harvesting unit, LED and SiO2/TiO2 DBR leads to a waterproof device architecture. Figure 1a schematically illustrates the fabrication process of the waterproof MGOS, wherein both the harvesting units and the LEDs share identical InGaN/GaN QW structures. Figure 1b shows an optical microscopy image of an MGOS consisting of eight cells in a single chip. The whole chip is 0.2 mm thick, 6.9 mm wide and 6.9 mm long, and it weighs 42 mg. Each cell is composed of a harvesting unit and a 60 μm-diameter LED, and can work independently. As shown in Fig. 1c, each harvesting unit interconnects with the LED via metal wires. To achieve a high open-circuit voltage (Voc) and consequently power the micro-LED [39], two 1 × 2 mm² QWDs connected in series are merged to form an energy harvester, which converts coded light into electrical energy and signals through the photovoltaic effect. Figure 1d illustrates a cross-sectional scanning electron microscope image of the 2.28 μm-thick SiO2/TiO2 DBR that is deposited on the device surface using an optical thin-film coater. In order to achieve the desired reflectance spectra, the DBR has an inhomogeneous thickness distribution of the SiO2/TiO2 pairs. The SiO2/TiO2 DBR has three distinct advantages: (i) it functions as an electrical isolation layer to form a waterproof architecture; (ii) it serves as an optical filter that separates the emitted light from the excitation light, allowing the emitted light to pass through the DBR; (iii) it operates as an optical reflector that reflects the incident light back in the upward direction to improve light absorption. Figure 2a shows the typical current-voltage (I-V) plots of the LED with/without environmental light, and the inset illustrates the emission image at a forward voltage of 2.1 V. The GaN QWD can absorb photons to produce a photocurrent. The electroluminescence (EL) spectra of the QWD are plotted as a function of injection current in Fig. 2b. When the device operates as a light-emitting device, the light emission increases, and the dominant EL peak exhibits a slight blue shift from 523.9 to 520.2 nm as the injection current increases from 1 to 3 mA. On the other hand, the device can harvest energy from environmental light when it operates as an energy harvester.
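As a rough illustration of the DBR geometry described above (and not the actual design, which the authors state uses an intentionally inhomogeneous thickness distribution), the short sketch below estimates nominal quarter-wave layer thicknesses for a SiO2/TiO2 stack centered near the ~520 nm LED emission. The refractive indices are assumed textbook values, not figures taken from the paper.

```python
# Rough quarter-wave estimate for a SiO2/TiO2 DBR (illustrative only).
# Assumed refractive indices (typical values, not from the paper):
n_sio2, n_tio2 = 1.46, 2.4
lambda0_nm = 520.0                   # design wavelength near the LED emission peak

d_sio2 = lambda0_nm / (4 * n_sio2)   # ~89 nm per SiO2 layer
d_tio2 = lambda0_nm / (4 * n_tio2)   # ~54 nm per TiO2 layer
pair_nm = d_sio2 + d_tio2

total_nm = 2280.0                    # reported total DBR thickness (2.28 um)
pairs = total_nm / pair_nm           # ~16 pairs if the stack were purely quarter-wave

print(f"SiO2 layer ~{d_sio2:.0f} nm, TiO2 layer ~{d_tio2:.0f} nm per pair")
print(f"A 2.28-um quarter-wave stack would hold ~{pairs:.0f} pairs")
```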
The spectral emission-responsivity overlap further endows the QWD with the capability to harvest, detect and modulate the light emitted by a device sharing an identical quantum well structure. Based on this intriguing spectral overlap, a variety of sophisticated GaN photonic circuits have been studied [40-43], opening feasible routes to monolithically integrate GaN photonics with electronics on a tiny chip. Figure 2c shows the angle-resolved reflectivity spectra of the chip measured from the SiO2/TiO2 DBR surface. Dark blue indicates the regions with low reflectivity, while dark red denotes the areas with high reflectivity. The SiO2/TiO2 DBR exhibits uniform reflectivity for incident angles from −40° to 40°. Compared with the device without the SiO2/TiO2 DBR, the device with the DBR allows its own light emission to pass but suppresses shorter-wavelength light traveling through the DBR, as shown in Fig. 2d. Figure 2e shows that one harvesting unit can generate a Voc of 3.72 V and a short-circuit current density (Jsc) of 1.13 mA/cm² with a photon conversion efficiency (PCE) of 2.32% under 1-sun illumination. The generated energy can be stored in a battery or supercapacitor. Figure 2f shows that an individual harvesting unit can charge a 100 μF supercapacitor to a steady-state voltage of 3.72 V in 14.9 s.

Figure 2. a Current-voltage plots of the LED with (black curve with circles) and without (red curve with circles) external illumination, where the inset shows the light emission image biased at 2.1 V. b Measured electroluminescence and responsivity spectra of the GaN-based quantum well diode. c Angle-resolved reflectivity spectra measured from the SiO2/TiO2 distributed Bragg reflector (DBR) surface. Dark blue indicates the regions with low reflectivity, while dark red denotes the areas with high reflectivity. d Comparison of normal reflectivity spectra for the device with (black curve with circles) and without (red curve with circles) the SiO2/TiO2 DBR. e Current density (black curve with circles) and power density (green curve with circles) versus voltage characteristics of individual harvesting units under one-sun illumination. Voc, open-circuit voltage; Jsc, short-circuit current density; FF, fill factor; PCE, photon conversion efficiency. f Time-dependent voltage curves of a 100-μF supercapacitor charged by one harvesting unit, where the inset shows a schematic diagram of the charging process.

Wireless energy harvesting and light communications. As schematically illustrated in Fig. 3a, the MGOS can simultaneously achieve wireless energy harvesting and communication through light. The harvesting unit offers the coexistence of light energy-harvesting and light energy-sensing functionalities. Therefore, the unit converts information-coded light into electrical energy when the 405-nm laser pulses its light into coded signals. The harvested energy depends on the illumination power. As shown in Fig. 3b, when it is not interconnected with a monolithically integrated LED, the harvesting unit produces steady signal outputs from 3.99 to 4.12 V at a modulation frequency of 5 kHz as the laser output power increases from 5 to 55 mW. Without external illumination, the harvested energy decays with time. As a result, the unit produces higher decay signal outputs as the modulation frequency increases. Figure 3c illustrates that the decay signal outputs are 1.99, 3.57, and 3.95 V at modulation frequencies of 500, 5000, and 50000 Hz, respectively, at a laser output power of 20 mW.
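As a back-of-the-envelope check on the harvesting figures quoted above (Voc = 3.72 V, Jsc = 1.13 mA/cm² and PCE = 2.32% under 1-sun illumination, plus the 100 μF supercapacitor charged to 3.72 V in 14.9 s), the short sketch below derives the implied fill factor and the energy delivered to the capacitor. This is simple arithmetic on the reported numbers under the stated assumption of a 100 mW/cm² 1-sun irradiance, not an analysis taken from the paper.

```python
# Back-of-the-envelope checks on the reported harvesting figures.
v_oc = 3.72        # V, open-circuit voltage of one harvesting unit
j_sc = 1.13e-3     # A/cm^2, short-circuit current density
pce  = 0.0232      # photon conversion efficiency under 1 sun
p_in = 0.100       # W/cm^2, standard 1-sun irradiance (assumed)

# Implied fill factor from PCE = Voc * Jsc * FF / Pin
ff = pce * p_in / (v_oc * j_sc)
print(f"implied fill factor ~ {ff:.2f}")                 # ~0.55

# Energy stored in the 100 uF supercapacitor at 3.72 V, and the mean charging power
c, v, t = 100e-6, 3.72, 14.9
energy_j = 0.5 * c * v**2
print(f"stored energy ~ {energy_j * 1e3:.2f} mJ")         # ~0.69 mJ
print(f"average charging power ~ {energy_j / t * 1e6:.0f} uW")  # ~46 uW
```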
When the harvesting unit is connected to the LED, the encoded power directly turns on the LED, thereby emitting information-coded light for both illumination and relay communication at the same time. The detailed energy harvesting performance is provided in Supplementary Figure 1 (see Supplementary Information). As demonstrated in Fig. 3d, the harvesting unit can supply the LED with steady operating voltages from 2.33 to 2.78 V at a modulation frequency of 5 kHz as the laser output power increases from 5 to 55 mW. Figure 3e shows that the decay signal amplitudes of the LED increase from 1.46 to 1.91 V as the modulation frequency increases from 1 to 5 kHz at a laser output power of 20 mW. Furthermore, an external receiver can decode the information-coded light from the monolithically integrated LEDs for relay wireless light communication. An external photodiode captures this information-coded light and demodulates it to recover electrical signals, which are sent directly to a digital storage oscilloscope without additional pre-shaping or circuit amplification. Since the harvesting unit is based on InGaN/GaN quantum wells, the device can harvest energy from short-wavelength light. Both power and information are delivered wirelessly through a 389.4-nm flashlight beam with a full width at half maximum of 10.3 nm. The harvesting unit efficiently converts light energy into electricity to directly turn on the LED. Figure 4a shows an active MGOS consisting of eight harvesting units and eight LEDs on a single chip. The generated electricity can power all eight LEDs when we shine light on the chip. Compared to the exciting light source, the monolithically integrated LED can be placed at the target position for accurate illumination, and consequently the illumination area is smaller. Additionally, the wavelength of the emitted light can be adjusted as needed. Since the MGOS has no energy storage unit, the LEDs turn off when the external light source is switched off, as demonstrated in Supplementary Movie 1. Since the interconnecting metal wires are protected by the SiO2/TiO2 DBR layer, the MGOS is waterproof. Conventional wearable/implantable devices are usually encapsulated using polydimethylsiloxane. Compared with polydimethylsiloxane, both the sapphire substrate and the SiO2/TiO2 DBR films can sustain higher working temperatures. To confirm this, the chip was put into a heat-proof beaker, and the water was heated using a heater. After maintaining the chip in 100 °C water for 1 h, all eight LEDs turn on when the beam of the 389.4 nm flashlight strikes the chip, as shown in Fig. 4b. Supplementary Movie 2 demonstrates practical operation of the MGOS in 100 °C water. The boiling water produces air bubbles, and the MGOS can still simultaneously achieve underwater energy harvesting and transfer through light. Moreover, the chip was put into water and frozen in a refrigerator at a temperature of −20 °C. As a result, the chip was trapped inside ice. Since ice is transparent, the light beam can pass through it and transfer optical energy and signals to the harvesting unit. After freezing the chip in −20 °C ice for 72 h, we used the 389.4 nm flashlight beam to activate the MGOS. The picture in Fig. 4c shows that all eight harvesting units can generate electricity from the incident light and directly cause all eight LEDs to turn on. Supplementary Movie 3 demonstrates practical operation of the MGOS after being frozen in −20 °C ice for 72 h.
Figure 4d shows one 2 × 2 wearable MGOS array mounted on the abdominal skin of a living Carassius auratus. The total weight was approximately 200 mg, of which the 2 × 2 MGOS array accounted for 170 mg. Figure 4e shows an underwater wearable operational block diagram. The flashlight beam passes through the water and strikes the MGOS attached to the body of a freely swimming fish, achieving real-time identification and tracking of the moving animal and enabling study of its natural behavior. As illustrated in Fig. 4f, fish wearing these chips can swim freely in a water tank, and underwater light energy transfer and communication are successfully realized by shining the flashlight beam on the chips. Supplementary Movie 4 demonstrates simultaneous wireless light harvesting and emission for all the 2 × 2 MGOS arrays when the flashlight beam passes through the water and strikes these chips, which are attached to a living fish using waterproof tape. Such a chip, monolithically integrating a III-nitride transmitter, receiver, and energy harvester, could be developed into new medical devices implantable into a living mouse brain for exploring optogenetics. Both energy and information would be transferred wirelessly via light: the power generated by the energy harvester would turn on the monolithically integrated transmitter for optical stimulation, and the monolithically integrated transmitter could also detect the optical response from the mouse. The critical challenge for such a chip is how to monolithically integrate energy storage, driver and receiver into a single GaN chip. Recently, without involving re-growth or post-growth doping, we successfully integrated GaN metal-oxide-semiconductor field effect transistors (MOSFETs), a transmitter, a waveguide, and a receiver into a single chip [38].

Conclusions

These results show that the MGOS can achieve wireless power transfer and communication through light, effectively addressing the challenge of underwater optoelectronics. The MGOS fabrication process is compatible with established GaN LED technologies and manufacturing tools, and a waterproof package is easily formed by depositing a SiO2/TiO2 DBR passivation layer to protect the metal wires among devices, making it possible to monolithically integrate large numbers of devices into a single chip in a cost-effective manner. These simple, miniaturized, wearable MGOS architectures enable robust multifunctional operation without complicated external circuits, thereby providing intriguing features for a wide variety of underwater applications.

Methods

Fabrication. Epitaxial films consisting of unintentionally doped GaN (u-GaN), Si-doped n-GaN, InGaN/GaN multiple quantum wells, and Mg-doped p-GaN were grown on a 4-inch patterned sapphire substrate by metal-organic chemical vapor deposition. The mesa regions were defined by photolithography and etched to a depth of 1.4 μm to expose the n-GaN surface. Inductively coupled plasma (ICP) etching was performed with a mixture of Cl2 and BCl3. Deep ICP etching was further carried out to completely remove the epitaxial films for device isolation. A 95 nm-thick transparent indium tin oxide (ITO) current-spreading layer was deposited by sputtering, followed by rapid thermal annealing at 530 °C in a N2 atmosphere for 7 min. Subsequently, the ITO layer was patterned and etched away to expose the n-GaN surface using a mixture of HCl/FeCl3.
Ni/Al/Ti/Pt/Ti/Pt/Au/Ti/Pt/Ti metal stacks were deposited on the n-GaN and ITO surfaces, followed by a metal lift-off process and rapid thermal annealing. A 1000 nm-thick SiO2 layer was deposited on the wafer by plasma-enhanced chemical vapor deposition. The electrode and bonding pad patterns were then photolithographically defined, and dry ICP etching was performed to etch the SiO2 layer away with a mixture of SF6, CHF3 and He. The Ni/Al/Ti/Pt/Ti/Pt/Au metal pads were then deposited by e-beam evaporation, followed by a metal lift-off process and rapid thermal annealing. The sapphire substrate was lapped and polished down to 200 μm, and the chips were finally diced by ultraviolet nanosecond laser micromachining. A waterproof device architecture was formed after depositing the SiO2/TiO2 DBR on the device surface. The fab performed reliability and long-term stability experiments on the devices.

Reflectivity spectra characterization. The spectral characterization of the TiO2/SiO2 DBR was performed using an angle-resolved micro-reflection measurement system. A Bentham WLS100 quartz halogen lamp with a 200 μm-diameter fiber pigtail was used as the white light source. The incident light illuminates the sample with a circular light spot of ~20 μm in diameter, and the reflected light is then collected by another fiber pigtail connected to an Ocean Optics USB2000 spectrometer. To observe the optical image of the measured sample, a CCD camera is used in the measurement system.

EL and responsivity spectra characterization. The responsivity spectrum was measured using an Oriel IQE-200B (Newport Corp.), in which a xenon lamp is used as the light source and a calibrated reference detector provides reliable and repeatable calibration. For the EL measurement, the emitted light was coupled into a 200 μm-diameter multimode fiber by a lens system and fed to an Ocean Optics HR4000 spectrometer for characterization.

Simultaneous energy transfer and communication through light. A 405-nm laser pulses its light with encoded information to illuminate the harvesting units, thereby achieving wireless energy transfer and communication simultaneously. An external Hamamatsu C12702-11 photodiode module detects the spatial light emission from the monolithically integrated LED and converts the photons back into electrons through the micro-transmittance setup. The received signals are sent directly to an Agilent DSOS604A digital storage oscilloscope for characterization.

Experimental model and subject details. Carassius auratus with a body length of 18-19 cm were used in this study. The fish were kept in a water tank at room temperature. One 2 × 2 MGOS array with a total weight of 200 mg was mounted on the abdominal skin of a living Carassius auratus using waterproof tape. The flashlight beam passes through the water and strikes the MGOS attached to the body of the freely swimming fish, achieving real-time identification and tracking of the moving animal and enabling study of its natural behavior. The experiment strictly abided by the rules of animal experimentation; it mainly focused on observing the behavior of the fish and did not bring them any physical harm. After the experiment, the fish were released into the school's lake on January 15, 2022.
All experiments on Carassius auratus were approved by the Animal Ethics and Welfare Committee of Nanjing University of Posts and Telecommunications and were in line with animal protection, animal welfare, and ethical principles, as well as the relevant national provisions on experimental animal welfare. Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability. The data that support the findings of this study are available from the corresponding author upon reasonable request. Underlying data for the main manuscript figures are included as an Excel file in the Source Data.
3,880.8
2022-07-07T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
Multi-lepton signatures at LHC from sneutrino dark matter We investigate multi-lepton LHC signals arising from an extension at the grand unification scale of the minimal supersymmetric standard model (MSSM) involving right-handed neutrino superfields. In this framework neutrinos have Dirac masses and the mixed sneutrinos are the lightest supersymmetric particles and hence the dark matter candidates. We analyze the model parameter space in which the sneutrino is a good dark matter particle and has a direct detection cross-section compatible with the LUX bound. Studying the supersymmetric mass spectrum of this region, we find several signatures relevant for LHC, which are distinct from the predictions of the MSSM with neutralino dark matter. For instance two opposite sign and different flavor leptons, three uncorrelated leptons and long-lived staus are the most representative. Simulating both the signal and expected background, we find that the multi-lepton signatures and the long-lived stau are in the reach of the future run of LHC with a luminosity of 100/fb. We point out that if one of these signatures is detected, it might be an indication of sneutrino dark matter. Introduction The Standard Model of particle physics has encountered a tremendous success, however it leaves many questions unanswered, such as the hierarchy problem, the origin of neutrino masses and the origin of a non-baryonic dark matter candidate. Several of these hints point toward the existence of new physics around the TeV scale and a very well motivated theoretical scenario is supersymmetry (SUSY) [1][2][3]. The constrained minimal supersymmetric standard model (MSSM) however has come to a critical point, firstly because of the null results of ATLAS and CMS on searches for supersymmetric partners of the standard model particles. Besides setting strong limits on the mass of the colored supersymmetric sector [4,5] the non observation of squark and gluinos has moved the experimentalist attention to the electroweak gaugino sector, where the LHC collaborations have now set constraints stronger than LEP searches [6,7]. Secondly, the recent discovery of the Higgs boson, with a mass around 126 GeV [8,9] has crucial importance for the MSSM, as such value requires large radiative corrections, JHEP04(2014)100 which scale as the logarithm of the supersymmetric masses, in particular with the stop masses. Consequently, the latter must be rather high, well above 1 TeV unless the stop sector has maximal mixing, thus suggesting that the mass scale of SUSY particles could be substantially higher than expected from fine-tuning arguments. This would also make very challenging, if not impossible, to detect SUSY at LHC in a direct or indirect way [10][11][12][13][14][15][16]. Prospects for detection remain interesting if some supersymmetric states are still sufficiently light, which in general implies to go beyond the constrained MSSM. There are several possibilities, such as non universal Higgs masses, non universal gaugino masses, which all ends up to some sort of split SUSY scenarios where a part of the spectrum is heavy and the rest is still in the reach of LHC. In general the electroweak SUSY fermions are at TeV scale, while the scalars are at much higher scale. The particle physics model we analyze here is a far less investigated extension of the MSSM focused on the sneutrino, the scalar super-partner of the left-handed neutrino, which plays the role of dark matter candidate instead of the usual neutralino. 
Since neutrinos have masses, as is now clearly understood by a host of independent and very robust experimental results and theoretical analyses [17,18], the motivation for considering this model is to study a natural and direct extension of the MSSM which contains terms in the supersymmetric lagrangian which can drive neutrino masses. The model we consider incorporates at the same time the new physics required to explain two basic problems of astroparticle physics: the origin of neutrino masses and the nature of dark matter. We do not attempt to be totally exhaustive on the type of supersymmetric model and we focus on the dark matter phenomenology of the sneutrino within the particular framework described below. Connections between neutrino physics and the phenomenology of sneutrino dark matter arise in general in supersymmetric see-saw models and have been discussed in e.g. [19][20][21][22][23][24][25][26][27][28][29][30][31]. The sneutrino as dark matter candidate is excluded in the MSSM, because it has a non-zero hypercharge. Indeed its coupling to the Z boson makes it annihilate too efficiently in the early Universe, hence its final relic abundance is lower than the value Ω DM h 2 measured by the Planck satellite [32]. Besides the problem of being under abundant, the direct detection is the most stringent limit for this candidate: the scattering cross-section off nucleus is mediated by Z boson exchange on t-channel, giving order to spin-independent (SI) cross-section of about 10 −39 cm 2 , value excluded already a decade ago for dark matter particles heavier than 10 GeV. The picture changes if we include in the MSSM a right-handed neutrino superfield (MSSM+RN from here on), which gives rise to Dirac neutrino masses. Being the theory supersymmetric then, the superfield contains as well a scalar right-handed field, the scalar neutrino rightÑ . This field, if at TeV scale, can mix with the left-handed partnerν L and make the sneutrino, mostly right-handed, a viable dark matter candidate [24,[33][34][35]. Pure right-handed sterile sneutrinos are viable dark matter candidates as well, as discussed e.g. in [36][37][38][39]. The phenomenology of the MSSM+RN model has been investigated in the framework of LHC constraints and direct detection in [40,41] and for indirect detection and cosmology in [24,39,42]. In this paper we review the status of the sneutrino as dark matter after the Higgs boson mass measurements, by exploring the SUSY parameter space with the soft breaking terms fixed at the grand unification (GUT) scale, however allowing for JHEP04(2014)100 non universal slepton and gaugino masses. We also assess the impact of the most recent exclusion bound from LUX [43] for dark matter direct searches. In this framework the colored particles will be mostly heavier than the scalar leptons and gauginos, so that we can satisfy the requirement of having a Higgs at 125 GeV. A scenario where the dark matter candidate is different than the standard neutralino and is linked to the neutrino physics is plausible and has interesting motivations. Therefore it would be worthy to improve the study on this model and the analysis of the distinctive signatures expected at colliders, which is the main motivation of our paper. Signatures at LHC from sneutrinos, arising from the strong production of squarks and gluinos, have been investigated in [34,44,45]. By exploiting the tight connection with the lepton sector, we instead focus on multi-lepton signatures that can arise from the sneutrino dark matter. 
We consider three peculiar signatures, which can be disentangled from MSSM standard scenario, based mainly on these signals: two opposite sign leptons with different flavor and three uncorrelated leptons. An efficient way of probing these signatures is via direct chargino production [46], as we will discuss in details. We run Monte Carlo simulated events followed by detector simulations for representative benchmarks that can arise in the MSSM+RN parameter space, in which the sneutrino is a good dark matter candidate. We discuss how these signatures can be detected and eventually distinguished with respect to the standard MSSM picture, whenever possible. Besides the multi-lepton final states, we consider signals coming from long-lived charged particles, in particular the lightest scalar tau mass eigenstate (τ − 1 ). In a corner of the MSSM+RN parameter space such particles can have life-time long enough to decay outside the detector volume or in the hadronic calorimeter, giving for instance disappearing tracks as signature. Interestingly we will show that all these signatures are connected to the dark matter relic density constraint: the annihilation processes to get Ω DM h 2 will fix the SUSY mass spectrum and hence the signals at collider. The rest of the paper is organized as follows. The supersymmetric model under investigation is described in section 2, while in section 3 we define our numerical analysis. In section 4 the parameter space of the MSSM+RN that leads to good dark matter candidates is detailed. Section 5 provides an in-depth discussion of the relevant signatures at collider from sneutrino dark matter and differences/similarities with respect to the standard MSSM framework, together with section 6. Finally we summarize our findings in section 7. In appendix A we discuss the prior choice and show the marginalized one dimensional posterior probability density functions for the parameters relevant for the dark matter phenomenology. Supersymmetric framework The MSSM+RN model we use has been defined in [22,24,33] and is similar to [35]. The superpotential for Dirac right-handed neutrino superfield, with the lepton violating term absent, is given by JHEP04(2014)100 where Y IJ ν is a matrix in flavor space (which we choose to be real and diagonal), from which the mass of neutrinos, of Dirac nature, are obtained, m I D = v u Y II ν . In the soft-breaking potential there are additional contributions due to the new scalar fields with µ being the renormalization scale, g 2 and g 1 the SU(2) and U(1) gauge couplings, Y t,τ the top and τ Yukawa respectively. Notice that the right-handed soft mass receives corrections only from the trilinear term, which affects as well the running of the left-handed part. This was already recognized in [35,44], but we report it here as well to set the basis of our analysis. Assuming negligible electron and muon Yukawas and keeping only the τ Yukawa Y τ in the RGEs leads toν 1e =ν 1µ andν 1τ to be the lightest sneutrino mass eigenstate and hence the LSP. Similarly for the heavier states we haveν e2 =ν µ2 andν τ 2 ≡ν 2 . From here on we drop the flavor index and consider the sneutrino dark matter to be constituted bỹ ν 1τ ≡ν 1 , unless stated otherwise. 
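The left-right sneutrino mixing described above can be illustrated with a minimal numerical sketch: a generic 2 × 2 mass-squared matrix whose off-diagonal entry is driven by the Aν trilinear term is diagonalized to obtain mν1, mν2 and sin θν. The off-diagonal form Aν v sin β (convention-dependent factors omitted) and all input values are illustrative, not parameters of the actual scan.

```python
import numpy as np

def sneutrino_spectrum(mL2, mN2, a_off):
    """Diagonalise a 2x2 (LL, RR) sneutrino mass-squared matrix.

    mL2, mN2 : left/right soft masses squared (GeV^2), with D-term and RGE
               corrections lumped in for this sketch
    a_off    : off-diagonal entry (GeV^2), driven by the A_nu trilinear term
    """
    M2 = np.array([[mL2, a_off],
                   [a_off, mN2]])
    eigval, eigvec = np.linalg.eigh(M2)       # ascending: nu~_1 (LSP), nu~_2
    m1, m2 = np.sqrt(eigval)
    sin_theta = abs(eigvec[0, 0])             # left-handed content of nu~_1
    return m1, m2, sin_theta

# illustrative inputs only (not a scan point): heavy, mostly right-handed LSP
A_nu, v, tan_beta = 200.0, 246.0, 10.0        # GeV, GeV, dimensionless
sin_beta = tan_beta / np.sqrt(1.0 + tan_beta**2)
m1, m2, s = sneutrino_spectrum(mL2=1500.0**2, mN2=400.0**2,
                               a_off=A_nu * v * sin_beta)   # convention factors omitted
print(f"m_nu1 = {m1:.0f} GeV, m_nu2 = {m2:.0f} GeV, sin(theta_nu) = {s:.3f}")
```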
The relevant parameters at electroweak (EW) scale for the sneutrino sector are the two mass eigenstates mν 1 and mν 2 and the mixing angle θν, related to the Aν term via sin 2θν 3 Set up of the numerical analysis Parameters and methodology We study the MSSM+RN in the framework in which the soft parameters are considered non-universal at a high scale M X , where supersymmetry breaking is transmitted to the observable sector via gravity mediated mechanism. The model is defined by the following free parameters, whose initial values are understood to be fixed at the scale M X where the M i are the gaugino masses and m H denote the common entry for the two Higgs doublet masses, taken to be equal (m Hu = m H d ). The A L and A Q are the scalar trilinear couplings for the sleptons and squarks respectively. Finally µ and B are the mass term for the Higgs fields in the superpotential, equation (2.1), and the coefficient of the bilinear term in the soft scalar potential, equation (2.2). Since there are many free parameters, a random scan would turn out quite inefficient in exploring the parameter space, as it scales as n 2 , where n is the number of random variables. To accomplish an efficient sampling we adopt an approach based on Bayes' theorem where d are the data under consideration, L(d|θ i ) is the likelihood function, encoding how our model describes the data, π(θ i ) is the prior probability distribution function (pdf) associated to each parameter, and p(θ i |d) is the posterior pdf. The prior pdf is independent on the data and describes our belief on the value of the theoretical parameters, before the confrontation with the experimental results. All parameters are soft SUSY breaking terms, except µ: since they have a common origin it is reasonable to assume that they have similar size, and the initial conditions are given at the GUT scale, M X ∼ 10 16 GeV. We assume gaugino masses non-universality, allowing the three parameters to vary free within a similar range of values. The scalar masses are not unified, even though we still assume them to be common for all the three flavors. The parameter m N in general does not depend on the other mass parameters, in particular is not linked to m L , which instead is related to the charged slepton masses. On the same vein we consider m R independent as well. As the major focus of the model is the slepton sector, we consider one common soft mass term for all scalar quarks, m Q . Similarly we let free to vary the trilinear scalar couplings, which are taken to be equal for all flavors. For the charged sleptons we keep the usual alignment to the Yukawas, while for the sneutrino sector, the Aν term is let free to provide efficient mixing between the left and right component of the sneutrino. We use flat priors in the ranges defined in table 1. Usually a common choice for MSSM parameters is {tan β, sign(µ)}, which replaces the bilinear term and the Higgs mass term in the superpotential, {B, µ}, see expression in equation (3.1). To consistently pass from one set of parameters to the other we follow the prescription in [48,49]. This approach consists in taking M Z on the same foot as the rest of the experimental data and computes the Jacobian of the transformation in the parameter space, adding it consistently to the posterior pdf. The Jacobian factor has a beneficial impact of incorporating a fine-tuning penalization, giving low statistical weight to points with very large masses. 
1 Constraints and observables Here we describe what are the constraints and observables implemented in our likelihood, as summarized in table 2. The dark matter phenomenology is constrained by requiring that the relic abundance matches the value measured by the Planck satellite [32] and that the sneutrino has a crosssection off nucleus below the LUX exclusion bound [43] mixing between left and right sneutrino fields, the lightest sneutrinoν 1 coupling with the Z boson is reduced by a factor sin θν. The averaged cross-section is given by where ξ ≡ min(Ω DM h 2 , Ωνh 2 ), µ n is the sneutrino-nucleon reduced mass, A (Z) is the mass (atomic number) of the nucleus. The couplings f n , f p to neutron and proton respectively are computed directly from the model parameters. We do not consider the uncertainties related to the pion nucleon sigma term σ πn , kept fixed to most recent value from lattice simulations [50], and we refer the interested reader to e.g. [51][52][53][54][55]. We average on the number of proton and neutron nucleons to extract the cross-section on Xe nucleus, which is then compared to the LUX exclusion limit. Ifν 1 is light enough to be produced in Z decay, its contribution to the Z invisible width is given by: where Γ ν = 166 MeV is the Z decay width into neutrinos. The quantity in equation (3.4) is well measured and can not be larger than 2 MeV [56]. On the same vein we require that light sneutrinos below the Higgs resonance, hence produced by Higgs decay, should not contribute more than 65% to its invisible decay width [57,58]. As far as it concerns the particle physics bounds, we require that the mass of the lightest CP-even Higgs satisfies the Higgs boson mass measured by both CMS [8] and ATLAS [9] collaborations. The value of the Higgs mass we use as observable is a statistical mean of both CMS and ATLAS measurements, as obtained in [16]. The charged slepton masses, mẽ ,μ , should be compatible with the mass lower bound from LEP, which occurs at JHEP04(2014)100 100 GeV [59]; a similar bound applies as well for the lightest chargino mass eigenstate. Thẽ τ − 1 has a somehow lower exclusion bound of 85 GeV, which comes from LEP measurement on the W boson decay width [59]. We are aware of the latest bounds on the gluino and squark masses from ATLAS [60] in simplified models, however we do not include them in our analysis due to complications in translating the bounds for a framework with a LSP of different nature than the neutralino. We also consider the constraints coming from the rare decays B → X s γ [61] and B 0 s → µ + µ − [62]. The full likelihood function is the product of the individual likelihood functions associated to an experimental result. For the quantities for which positive measurements have been made (as listed in the left part of table 2), we assume a Gaussian likelihood function with a variance given by combining the theoretical and experimental variances. For the observables for which only lower or upper limits are available we use a likelihood modelled as step function on the x% confidence level (CL) of the exclusion limit. On a practical level, the model has been implemented in FeynRules [63,64] (FR), by adding the appropriate term in the superpotential and in the soft SUSY breaking potential, following eqs. (2.2) and (2.1). We generate output files compatible with CalcHep in order to use the public code micrOMEGAS 3.2 [65] for the computation of the sneutrino relic density and elastic scattering cross-section. 
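The composition of the likelihood can be sketched as follows: Gaussian terms for the positively measured observables, with experimental and theoretical variances added in quadrature, and (optionally smoothed) step functions for the upper limits. The central values, uncertainties and field names below are placeholders for illustration only, not the exact entries of table 2.

```python
import numpy as np

def gaussian_lnlike(pred, obs, sigma_exp, sigma_th):
    """Gaussian likelihood with experimental and theoretical variances combined."""
    var = sigma_exp**2 + sigma_th**2
    return -0.5 * (pred - obs)**2 / var - 0.5 * np.log(2.0 * np.pi * var)

def upper_limit_lnlike(pred, limit, hard=True):
    """Step-function likelihood for an upper limit (optionally smoothed)."""
    if hard:
        return 0.0 if pred <= limit else -np.inf
    return -0.5 * max(0.0, (pred - limit) / (0.01 * limit))**2

def total_lnlike(point):
    """point: dict of model predictions for one sampled parameter-space point."""
    lnL = 0.0
    lnL += gaussian_lnlike(point["omega_h2"], 0.1199, 0.0027, 0.012)  # relic density (placeholder errors)
    lnL += gaussian_lnlike(point["m_h"], 125.7, 0.4, 2.0)             # Higgs mass in GeV (placeholder errors)
    lnL += upper_limit_lnlike(point["sigma_SI"], point["lux_limit"])  # LUX bound at this mass
    lnL += upper_limit_lnlike(point["delta_gamma_Z_inv"], 2.0e-3)     # extra invisible Z width (GeV)
    return lnL

sample = {"omega_h2": 0.118, "m_h": 124.9, "sigma_SI": 3e-46,
          "lux_limit": 1e-45, "delta_gamma_Z_inv": 0.0}
print(total_lnlike(sample))
```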
The FR package produces as well outputs compatible with the public code MadGraph5 [66] (MG5), which we use for the collider analysis at parton level. The input parameters are given in the SUSY Les Houches Accord 2 format [67]. The Monte Carlo simulation of the events make use of Pythia 6 [68] (as implemented within MadGraph5) for hadronization, as well as of the detector simulator Delphes 3 [69], with default ATLAS specifications. The supersymmetric particle spectrum is computed with the code SoftSusy, appropriately modified to adapt to micrOMEGAS 3.2. Finally the sampling of the parameter space is done with the code MultiNest v3.2 [70,71], which has the tolerance set to 0.5 and the number of live points to 4000. 2 The B-physics observables are computed by interfacing the program with SuperIso [72]. Sneutrinos as good dark matter candidates Instead of pursuing a full Bayesian analysis based on the posterior pdf, we use the equally weighted posterior sample. This sample contains points drawn randomly from the posterior pdf. More details about the sampling are given in appendix A, where we also comment on the impact of changing priors. Indeed the main interest of our analysis is firstly to find a correlation between the parameter space that leads to the good relic density and SI cross-section with the LHC signatures and secondly to obtain an efficient sampling of the parameter space. The result of the MultiNest run is illustrated in figure 1: in the left panel the crosssection σ SI n versus the sneutrino mass is shown and all points have a relic density compatible with the Planck measurement. We first note that there are not light sneutrinos with masses below the Higgs resonance around 63 GeV. As we adopt boundary conditions for the SUSY JHEP04(2014)100 parameters at the GUT scale, we did not find light sneutrinos as viable solution for the dark matter candidates, contrary to [24,35] in which the SUSY parameters are fixed at EW scale. In order to have a very light sneutrino of about 3-10 GeV with good relic density and not excluded by LUX, a very large Aν is required. In the RGEs the trilinear couplings affect the running of the scalar masses, hence a large value of the scalar trilinear term for the sneutrino induces instabilities and tachyonic solutions. 3 From the sampling, we highlight four regions, three of which have particular pattern in the SUSY mass spectrum. They all are relevant for LHC physics, giving rise to different signatures, as discussed in the next section, as well as they are characterized by different annihilation channels to achieve the relic density. On the Higgs pole (green points), the dominant role for attaining the correct relic abundance is played by the sneutrino itself via theν 1ν * 1 → ff annihilations mediated by the Higgs boson, as by definition of resonance region. As a consequence, the sneutrino mixing angle, shown as a function of mν in the right panel of figure 1, is fixed mainly by the requirement of being compatible with the LUX exclusion bound [43]. In order to suppress sufficiently the Z boson exchange on t-channel the sneutrino mixing can not be larger than 0.02. The rest of the SUSY spectrum is not directly constrained by relic density requirements and does not follow a particular pattern. The gaugino sector can be lighter than the scalar lepton sector or vice versa, in other words the Higgs resonance region contains a rich LHC phenomenology. 
JHEP04(2014)100 Another region, denoted by the orange points, has the characteristic of having longlivedτ − 1 . In particular the mass splitting between the sneutrino and the scalar tau, which is the NLSP (next to lightest SUSY particle) is smaller than 1 GeV. We discuss the details of theτ − 1 decay and life-time in section 5.1. Here we comment on the SUSY spectrum, which is similar to sort of split SUSY scenario, in the sense that the LSP and NLSP are at an energy scale still in reach of LHC, while all the other SUSY particles will remain elusive, above 1 TeV. The correct relic density forν 1 is achieved in two ways. When the sneutrinos have a small left-handed component, larger than about 0.02 though, the dominant annihilation channels areν 1ν * 1 → W + W − , ZZ, hh, tt and coannihilation with theτ − 1 is also relevant, such asν 1τ − 1 → ZW − , hW − . However the more right-handed the sneutrino becomes, the more the annihilation channelsτ + The magenta points denote the region where sneutrino and neutralino are close in mass within 10%, and the neutralino is mostly bino. The relic density in this case is fixed only by sneutrino annihilation, via these dominant processes:ν 1ν * 1 → W + W − , hh, ZZ and ν 1ν * 1 → tt, whenever the top threshold is opened. The mixing angle should be still sizable, around 0.02-0.04, to provide Ωνh 2 in accord with the measured value. On the contrary, the blue points denote the parameter space where the sneutrino is degenerate with the neutralino at the level of 10% and in addition with the chargino, at the same percentage level, that is neutralinos and charginos are either winos or higgsinos. Two possibilities for the relic density can arise. First, if the mass spectrum of theχ 0 1 ,χ 0 2 andχ + 1 is compressed, coannihilation is crucial for fixing the correct relic density. For instance it can involve a large number of annihilation processes such asχ 0 The second typology arises if the chargino is almost degenerate with the neutralino within 1-2%. In this case the relic density is driven only byχ + 1 andχ 0 1 coannihilation. In both cases the contribution of sneutrinos to Ω DM h 2 is irrelevant. This is the reason why the sneutrinos in this region can be almost purely right-handed and highly elusive for dark matter direct detection. This behavior seems also to be the common one for heavy sneutrinos. In the rest of the sampling, gray points, there isn't a particular pattern for SUSY mass spectrum. The correct sneutrino relic density is achieved by the sneutrino annihilating mainly via the s-channel exchange of a Z boson (from here on called Z-boson region) into W − W + , tt, ff and via t-channel neutralino and chargino exchange going into ff . The mixing angle exhibits a sizable component of left-handed component, as shown in figure 1. The values around 0.04 are a good compromise between achieving Ω DM h 2 and being compatible with the direct detection bounds. A large part of the sneutrino dark matter parameter space can be probed by next generation of dark matter experiments, such as XENON1T [73], which is denoted by the black dashed line in figure 1. The effect of the small Y τ in the mass spectrum makes the electron and muon sneutrino slightly heavier thanν 1 ; they will eventually decay into the LSP, with a process mediated most likely by the off-shell lightest neutralino (ν e,µ → ν e,µχ 0 * 1 → ν e,µν1 ν τ ) and producing only neutrinos plus the LSP. 
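The four highlighted regions can be read off the posterior sample by applying the mass-splitting criteria quoted above; a minimal sketch, with hypothetical field names and the thresholds taken from the text (Higgs pole near m_h/2, δm < 1 GeV for the long-lived stau, ~10% degeneracies for the neutralino and chargino regions), is the following.

```python
def classify(point, m_h=125.0):
    """Assign one posterior sample to a phenomenological region.

    point: dict of sneutrino, stau, neutralino and chargino masses in GeV
    (field names are hypothetical); thresholds follow the text: Higgs pole
    near m_h/2, long-lived stau for delta_m < 1 GeV, ~10% degeneracies.
    """
    m_sn, m_stau = point["m_snu1"], point["m_stau1"]
    m_n1, m_c1 = point["m_neutralino1"], point["m_chargino1"]

    if abs(m_sn - m_h / 2.0) / (m_h / 2.0) < 0.05:
        return "Higgs pole (green)"
    if 0.0 < m_stau - m_sn < 1.0:
        return "long-lived stau (orange)"
    if abs(m_n1 - m_sn) / m_sn < 0.10 and abs(m_c1 - m_sn) / m_sn < 0.10:
        return "sneutrino ~ neutralino ~ chargino (blue)"
    if abs(m_n1 - m_sn) / m_sn < 0.10:
        return "sneutrino ~ bino-like neutralino (magenta)"
    return "Z-boson region / rest of the sample (gray)"

print(classify({"m_snu1": 63.0, "m_stau1": 400.0,
                "m_neutralino1": 300.0, "m_chargino1": 500.0}))
```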
However this process might be very suppressed by the splitting in mass between sneutrino flavors, so that theν e ,ν µ can be long-lived, with a life-time that can range from 10 −4 s up to the age of the Universe. We do not discuss further the Table 3. Relevant SUSY breaking parameters at electroweak scale characterizing the four benchmarks used for simulating events at LHC, as labelled. The benchmarks are defined as follows: B1 is for the long -livedτ − 1 , B2 for the two same sign leptons, B3 for multi-leptons and finally B4 for direct chargino production. Collider signatures We have chosen four benchmark points, which are representative of the rich LHC phenomenology of sneutrino dark matter. The relevant SUSY breaking parameters at the electroweak scale are summarized in table 3. For the analysis we used a center of mass energy of 14 TeV and assumed a luminosity L = 100/ fb. Long-livedτ Long-lived charged particles at LHC have been studied in the MSSM or in models beyond the standard model from a theoretical point of view (see e.g. [40,74,75]) and are searched in depth experimentally (see e.g. [76,77]). As anticipated in the previous section, and discussed in [78], in the MSSM+RN framework the long-lived particle is typically aτ − 1 . JHEP04(2014)100 The mass matrix of the scalar τ is where the terms D L,R stand for the corrections to the soft masses arising from RGEs. The left-handed soft breaking mass m L is the only SUSY term common with the sneutrino. As the τ Yukawa is non negligible, the off-diagonal term is large and induces a mixing between left and right component as If theτ − 1 is the NSLP and the splitting in mass with the LSP is δm/mν < 10% (δm ≡ mτ− 1 − mν), it contributes to the sneutrino relic density as coannihilation processes become relevant. In general the requirement of coannihilation implies that the splitting in mass is much smaller than the W boson mass and theτ − 1 decays into the LSP only via three-body process, illustrated in figure 2 (left panel). The smaller the mass splitting the smaller is the decay width because of the suppression in the phase-space, leading eventually to long-lived τ − 1 for δm < 1 GeV. However the life-time does not only depends on δm, but on both the mixing angles θτ , θν and on the overall mass scale mν. We discuss the long-livedτ − 1 phenomenology using the benchmark point B1, given in table 3, in which This is representative of our sampling of long-lived scalar τ depicted by the orange points (figure 1). In B1 bothν 1 andτ − 1 are mostly right-handed, but the sneutrino is so sterile that theτ − 1 annihilation alone sets the relic density. The degeneracy between sneutrino andτ − 1 is 'accidental': it not entirely due to the left-handed mass 4 m L but also to RGEs effects. Searches for long-lived charged particles have excludedτ − 1 < 300 GeV [77], when produced directly in the pp collision, hence we consider only relatively heavyτ − 1 . This is the reason why the orange points are present only for heavy sneutrinos with mass larger than about 400 GeV. Figure 3 shows theτ − 1 life-time as a function of δm (solid magenta line). Below the W threshold, ττ 1 is a smoothly increasing function of δm, because it produces an off-shell W , which then decays into on-shell fermions/quarks asτ − 1 → W − * ν 1 →ν 1 f f . As far as δm goes below a certain mass threshold, for instance τ or µ, the decay is suppressed by the reduced number of decay possibilities but is still a three-body decay, hence there are no visible kinks. 
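For orientation only, the strong phase-space suppression of the three-body decay through an off-shell W can be mimicked with a muon-decay-like scaling Γ ∼ G_F² δm⁵/(192π³), multiplied by an unspecified overall mixing/channel factor. This is a rough order-of-magnitude sketch, not the calculation behind figure 3, and the dependence on θτ, θν and the overall mass scale enters only through the free suppression factor.

```python
import numpy as np

G_F = 1.1664e-5     # Fermi constant, GeV^-2
HBAR = 6.582e-25    # GeV * s

def stau_lifetime_estimate(delta_m, suppression=1e-3, n_channels=3):
    """Order-of-magnitude lifetime (s) of stau -> sneutrino + f fbar' via an off-shell W.

    delta_m     : stau-sneutrino mass splitting in GeV (assumed << m_W)
    suppression : lumped mixing-angle / coupling suppression (free parameter here)
    n_channels  : rough count of open fermion channels below delta_m
    """
    gamma = n_channels * suppression * G_F**2 * delta_m**5 / (192.0 * np.pi**3)
    return HBAR / gamma

for dm in (0.2, 0.5, 1.0):
    print(f"delta_m = {dm:.1f} GeV  ->  tau ~ {stau_lifetime_estimate(dm):.1e} s")
```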
This is in contrast with the MSSM picture: for aτ − 1 NLSP, its decay into JHEP04(2014)100 ionization as they travel through the detector. The search is performed by measuring the specific ionization energy loss and the time-of-flight. As these particles travel with velocity β = v/c measurably lower than the speed of light, they can be identified and their mass determined via the relation m = p T /(γβ) [76,77], with γ being the Lorentz factor. (ii) d ∼ O(100) mm: if the charged long-lived particles decay inside the hadronic calorimeter, they could be detected by tracks that appear to have few associated hits [80,81]. (iii) d ∼ O(10) mm: if the long-lived particle is neutral and decays, it gives rise to a displaced vertex. This search characterizes mostly SUSY R-parity violating scenarios and is not relevant for our purposes. We have simulated Monte Carlo events for the benchmark B1, assuming direct production of a pair ofτ − 1 's via electroweak Drell-Yan process. The production cross-section is σ = 8.23 × 10 −5 pb. We operated cuts similar to the search type (ii) to estimate what is the number of long-livedτ − 1 that can be detected. 3. To account for the detector simulation and background subtraction, we convoluted the signal after cuts with the efficiency in detecting charged tracks given in [77], which is = 0.15 for detecting one track and = 0.2 for detecting both tracks. The background for escaping charged tracks depends on the type of search. For (i) it is mostly composed of high p T muons with mis-measured velocity and is data driven, while for (ii), namely disappearing high p T tracks, it consists of charged hadrons (mostly pions) interacting in the hadronic calorimeter or low p T charged particles whose p T is badly measured. We are however not simulating the background and use the criterium 3. to take into account its effect. The result for B1 is illustrated in figure 4, where we plot the number of tracks for one detected heavy long-livedτ − 1 (left panel) and for detecting both long-lived chargedτ − 1 (right panel), that have been produced in pair. The light-gray region is the number of tracks before applying the cuts described above. Interestingly we see that several events can be measured and most likely they will leave the detector, while only a couple of events are expected to decay inside the hadronic calorimeter. This is due to the fact that theτ − 1 is very massive. Notice that the efficiency for detecting both charged tracks is higher due to less background. If a long-livedτ − 1 would be detected, could we distinguish between the MSSM and the MSSM+RN scenarios (figure 4)? It would be tricky to disentangle the two scenarios if the long-lived particle decays inside the hadronic calorimeter, however if it decays outside the detector volume and the time-of-flight can be measured, it would be possible to reconstruct JHEP04(2014)100 its mass by knowing the p T and have some hints if the LSP is a neutralino or a sneutrino. A long-lived stau is a signature of sneutrino dark matter and can be retrieved in other models, such as pure right-handed sneutrinos [82]. Two same sign leptons The possibility of having sleptons lighter than neutralinos is an interesting feature of the phenomenology of MSSM+RN specially for collider signatures, since this mass hierarchy could lead to potentially powerful signatures to test sneutrino as LSP at LHC. Let's start describing the phenomenology of sleptons in the MSSM+RN. 
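For search type (i) discussed above, the mass determination m = p_T/(γβ) from the measured momentum and time-of-flight velocity is simple to reproduce numerically; a toy example with arbitrary kinematics:

```python
import numpy as np

def hscp_mass(pT, beta):
    """Mass (GeV) of a heavy stable charged particle from its transverse momentum
    and time-of-flight velocity beta = v/c, via m = pT / (gamma * beta)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return pT / (gamma * beta)

# toy kinematics only: a slow, high-pT track
print(f"m = {hscp_mass(pT=500.0, beta=0.64):.0f} GeV")
```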
As it was mentioned above, the initial parameters are set at the gauge coupling unification scale M X , where SUSY breaking is transmitted to the observable sector. When solving the RGEs from M X to the EW scale, correlations are printed in the mass spectrum. Combining this effect with the requirement of sneutrinos to be good dark matter candidates, one can be able to understand the typical mass spectrum and therefore study potential experimental signatures. In the framework described in section 2 the stau is the lightest slepton due to the contribution of the tau Yukawa in the RGE. In addition, depending on the value of tan β and the trilinear term, the splitting in mass between the lightest stau and the other sleptons could increase. On the other hand, for the first and second generation of sleptons, typically the left component is heavier that the lightest neutralino mainly because its RGE has a term proportional to M 2 1 (bino mass) and another one to M 2 2 (wino mass). A second and major reason for this mass hierarchy is related with dark matter, as explained in the previous section: to have a good sneutrino candidate, m L is pushed to large values in order to have a mostly right-handed lightest sneutrino. The case of right-handed slepton is different: its coupling and mass are not constrained by the requirement of having a sneutrino as a dark matter candidate and its RGE receives contribution from M 2 1 but not from M 2 2 . As a consequence,l R is typically lighter than the left-handed slepton. Figure 5 JHEP04(2014)100 shows the relation between the chargino and the left-handed sleptons and with the staus. As expected, the region with lightτ − 1 is significantly larger than the region accouting for light left-handed sleptons. We find as well thatl R are lighter thanl L , however we do not show them as we will not use right-handed scalar leptons in the signatures, as explained more in details in section 6.1. The relation between sleptons and charginos is relevant for collider phenomenology because the dominant production process of electroweak particles is through chargino and neutralino production, via their higgsino and wino components. For that reason we only show the region with charginos lighter than 900 GeV. It was pointed out by [83] that as a consequence of requiring a sneutrino LSP, the chargino-neutralino production will have a final state with three leptons and missing energy where the two opposite sign leptons could have different flavors (since sleptons decay through a W boson). This is a very distinctive signature of sneutrino dark matter with respect to neutralino dark matter. Remember that in the MSSM the chargino-neutralino production will give a signal of three leptons but the two opposite sign leptons will have necessarily same flavor, as they are coming from the Z boson or from the neutralino decay through sleptons. We instead focus in the region where the NLSP is the lightest stau,τ − 1 , so that these signatures are present in all the regions where a slepton is lighter than the chargino in the gray/magenta points in figure 1. We consider the process depicted in figure 6: . To study this signature in more detail we use benchmark B2 described in table 3. The relevant masses and mixings are Figure 6. Relevant process for the two-lepton signal: chargino-neutralino production and subsequent decay through the lightest stau. Process BR with branching ratios given in table 4. Notice that the dominant branching ratio is into sneutrinos and leptons. 
The signal we consider here is slightly different from the one assumed in [83], as the final state in 5.4 contains two same sign leptons, however the third one is a hadronic τ , which due to the low efficiency in its identification and simulation we are not tagging. For the background we consider the production of W Z → W l + l − and ttW . The cross-sections are computed at LO with MG5 and Pythia 6. 99.99% The following cuts are applied, 1. Two same sign different flavor leptons with p T > 20 GeV and pseudo-rapidity η < 2.5; 2. At least one lepton with p T > 25 GeV; The kinematical variables considered in the analysis are p miss T and the invariant mass of the two selected leptons (M inv ). Figure 7 shows in the left panel the p miss T distribution and in the right panel the M inv distribution. Central panels show the signal distribution only. As one can see, the signal is accumulated at small values of p miss T and M inv where the background reaches its maximum value. In this region statistical and systematics errors are typically very small and therefore even when the ratio between signal and background is small the signal could be distinguished. The smallness of the ratio signal over background is illustrated in the lower panels. Of course, NLO contribution have to be included and the cuts have to be optimized to get an accurate idea of how the signal will stand over the background. Multi-leptons The phenomenology of the Higgs resonance region is potentially powerful to detect supersymmetry, because of the particular collider signatures that can arise. The composition of JHEP04(2014)100 Figure 8. Typical chargino and neutralino decay chains for the Higgs resonance region. the sneutrino requires generally a very small left-handed component with respect to most of the points in other regions, see figure 1. This implies that the splitting between m L and m N , and therefore between mν 2 and mν 1 , tends to be very large. Howeverν 1 with mass of about 63 GeV allows the rest of the mass spectrum to have low mass values as well. For instance m L does not need to be at about O(1) TeV to make the mν 1 to be right enough to be a good dark matter candidate. On the other hand, sinceν 1 andν 2 are almost pure right-and left-handed states respectively, charginos and neutralinos couple very weakly toν 1 and prefer therefore to decay toν 2 rather than to the lightest stau orν 1 , as shown in table 5 for the benchmark B3 as an example. Let us focus in the process Notice that we consider the decay ofν 2 toχ 0 1 ν, which is shown in figure 8. There is however another possibility,ν 2 decaying to Higgs andν 1 . This possibility is interesting since the coupling h −ν −ν is also the relevant one in the sneutrino annihilation in the early Universe. The study of this second possibility goes beyond the scope of this work, since it will not give rise to a distinguishable leptonic signature. The process shown in 5.6 contains two W bosons at the end of the decay chain. Considering the W leptonic decay to electrons and muons, the final state is given by three leptons not correlated in sign and flavor and two taus. To study this signature in more detail we use benchmark B3 described in table 3, with relevant masses and couplings and branching ratios summarized in table 5. In order to single out the most distinctive signature from the final state in process (5.6), we require three electrons or muons but neglect events with opposite sign same flavor JHEP04(2014)100 Table 5. 
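A minimal sketch of this selection (cuts 1-2 above, plus the invariant mass of the chosen same-sign, different-flavor pair) is given below; the lepton records are illustrative stand-ins for the Delphes output, not the simulated samples used here.

```python
import numpy as np
from itertools import combinations

def p4(pt, eta, phi, m=0.0):
    """Four-momentum (E, px, py, pz) from collider kinematics."""
    px, py, pz = pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)
    return np.array([np.sqrt(m**2 + px**2 + py**2 + pz**2), px, py, pz])

def inv_mass(p1, p2):
    s = p1 + p2
    return np.sqrt(max(0.0, s[0]**2 - s[1]**2 - s[2]**2 - s[3]**2))

def select_ss_df_pair(leptons):
    """Hardest same-sign, different-flavor pair with pT > 20 GeV, |eta| < 2.5
    and at least one lepton above 25 GeV; returns (l1, l2, M_inv) or None."""
    good = [l for l in leptons if l["pt"] > 20.0 and abs(l["eta"]) < 2.5]
    for l1, l2 in combinations(sorted(good, key=lambda l: -l["pt"]), 2):
        if (l1["charge"] == l2["charge"] and l1["flavor"] != l2["flavor"]
                and max(l1["pt"], l2["pt"]) > 25.0):
            m_inv = inv_mass(p4(l1["pt"], l1["eta"], l1["phi"]),
                             p4(l2["pt"], l2["eta"], l2["phi"]))
            return l1, l2, m_inv
    return None

event = [{"pt": 48.0, "eta": 0.7, "phi": 0.3, "charge": +1, "flavor": "mu"},
         {"pt": 27.0, "eta": -1.1, "phi": 2.4, "charge": +1, "flavor": "e"},
         {"pt": 15.0, "eta": 2.1, "phi": -1.0, "charge": -1, "flavor": "e"}]
print(select_ss_df_pair(event))
```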
Relevant branching ratios for decays in benchmark B3 for the three uncorrelated lepton signature. leptons. This condition could be relaxed allowing opposite sign same flavor leptons but forbidding the ones with invariant mass close to the Z boson mass. For the background we consider W Z → W l + l − and ttW . As in the previous case, W Z is simulated with MG5 and Pythia 8 and ttW using MG5 and Pythia 6. Selected events are required to pass the following cuts 1. Three leptons with p T > 20 GeV and η < 2.5; 2. Events with opposite sign same flavor (OSSF) leptons are forbidden; 3. When OSSF events are allowed we require them to have |M inv − M Z | < 10 GeV; 4. At least one lepton with p T > 25 GeV; 5. E miss T > 100 GeV. Figure 9 shows the missing transverse momentum distribution in the case when we allow (left) and forbid (right) OSSF leptons. As one can see, the ratio between signal and background is remarkable good (lower panels). Indeed the background for three uncorrelated leptons is very small and the signal stands well above it and is in the full reach of LHC at 14 TeV. Direct chargino production In the previous subsection we have shown that the sneutrino as LSP and the sleptons as NLSPs could have quite particular signatures of leptons without correlation of sign and flavor. However there is a significant region in the parameter space where sleptons are heavier than some of the neutralinos and charginos (see figure 5). These regions have more "traditional" signatures but still exhibit some particularities with respect to the MSSM. Direct chargino production could be a window to access these regions. The difference between the MSSM+RN with respect to the MSSM is that the chargino decay chain could be dominantly into two-body (χ ± 1 → l ±ν l ) instead of three-body (χ ± 1 → W ±χ0 1 → f fχ 0 1 ), producing a sharper distribution in the signal. We focus on direct chargino production, depicted in figure 10, with a final state of two leptons and missing transverse momentum This signal (two opposite sign leptons and two invisible particles ) also exists in the MSSM, arising from direct production of slepton (l ± → l ±χ0 1 ) but with smaller production crosssection. This search will access a large portion of the parameter space, specially if the lightest neutralino is higgsino or wino and therefore quasi-degenerated with the chargino, but also in most of the cases of bino-like lightest neutralino region. In order to get good efficiency in these searches we require a large enough splitting in mass between theχ 0 1 and theν 1 to be able to detect the decay of the chargino. This condition is recovered in the region where the sneutrino annihilates efficiently through the Z boson to satisfy dark matter constraints (gray points) and in the region whereν 1 is degenerated with the neutralino but not withχ ± 1 (magenta points). For the study of this signature we consider the benchmark B4 with parameters at EW scale described in table 3. The chargino is mostly wino and the relevant masses and mixing angles are given by The branching ratios are shown in table 6: notice that the decay into the LSP has the largest branching ratio, and this is a general situation for most of the sneutrino parameter space we consider (gray points, with sizable left-handed component, see figure 1). For background simulation we consider W + W − and W Z production at leading order including also the case where one of the gauge bosons is off-shell. 
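The three-lepton selection described above (cuts 1-5) differs from the previous one mainly in how opposite-sign same-flavor pairs are handled; a self-contained sketch of the veto and of the alternative Z-window requirement, again with illustrative lepton records, is the following.

```python
import numpy as np

def pair_mass(a, b):
    """Invariant mass of two (treated as massless) leptons from pT, eta, phi."""
    def p4(l):
        return np.array([l["pt"] * np.cosh(l["eta"]),
                         l["pt"] * np.cos(l["phi"]),
                         l["pt"] * np.sin(l["phi"]),
                         l["pt"] * np.sinh(l["eta"])])
    s = p4(a) + p4(b)
    return np.sqrt(max(0.0, s[0]**2 - s[1]**2 - s[2]**2 - s[3]**2))

def ossf_pairs(leptons):
    """All opposite-sign, same-flavor (OSSF) pairs in the event."""
    return [(a, b) for i, a in enumerate(leptons) for b in leptons[i + 1:]
            if a["flavor"] == b["flavor"] and a["charge"] != b["charge"]]

def pass_trilepton(leptons, met, veto_ossf=True, m_z=91.19):
    """Cuts 1-5: three leptons with pT > 20 GeV and |eta| < 2.5, at least one above
    25 GeV, E_T^miss > 100 GeV, and either an OSSF veto or, when OSSF pairs are
    allowed, |M_inv - M_Z| < 10 GeV for each such pair."""
    good = [l for l in leptons if l["pt"] > 20.0 and abs(l["eta"]) < 2.5]
    if len(good) != 3 or max(l["pt"] for l in good) < 25.0 or met < 100.0:
        return False
    pairs = ossf_pairs(good)
    if veto_ossf:
        return not pairs
    return all(abs(pair_mass(a, b) - m_z) < 10.0 for a, b in pairs)

event = [{"pt": 60.0, "eta": 0.2, "phi": 0.1, "charge": +1, "flavor": "mu"},
         {"pt": 35.0, "eta": -0.8, "phi": 1.9, "charge": +1, "flavor": "e"},
         {"pt": 22.0, "eta": 1.4, "phi": -2.0, "charge": -1, "flavor": "mu"}]
print(pass_trilepton(event, met=140.0))   # contains an OSSF mu+mu- pair -> vetoed
```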
In the analysis we use two kinematical variables, m T 2 [84] where l 1 and l 2 correspond to the two leptons we require in this analysis and M T is the transverse mass. We also use the effective transverse energy [85], where M ll inv and p ll T are the invariant mass and transverse momentum of the two selected leptons. Keep in mind that m T 2 and E eff T are expected to have a distribution with a maximum at the mass of the chargino for m T 2 and twice this value for E eff T . JHEP04(2014)100 Taking [7] as a reference, we consider the following cuts 5. Second hardest jet with p T < 50 GeV. Figure 11 shows the signal superposed with the background for m T 2 and E eff T , central panels show the signal only. Notice that the maximum of the signal is located around 400 GeV for m T 2 (and 800 GeV for E eff T ) where the background has decreased significantly, allowing us to disentangle the signal from the background. This is confirmed by the signal over background ratio in the lower panels. In case LHC measures this kind of signal it would be possible to estimate the size of the supersymmetric masses when combining m T 2 and E eff T . Colored particles communicate with the sneutrinos through neutralinos and charginos, making this signature general and very interesting. In this case however the signal will be associated to multi-jets as well. Hence the gluino and squark production is less promising for this type of search because of the huge expected background. 6 Discussion on other potential signatures 6.1 Multi-leptons froml R three-body decay An interesting phenomenological region is denoted by right-handed sleptons lighter than both the left-handed sleptons and the lightest neutralinoχ 0 1 . Staus are typically lighter that selectrons and smuons, as discussed before. The right-handed selectrons and smuons will decay through three-bodyl ± R → l ± ν l ν l , (6.1) The second case of (6.1), wherel R decays toτ 1 , is a particular signature, as the final state will lead to three uncorrelated leptons in flavor, depicted in figure 12. We do not simulate this signature since in our data sample right-handed sleptons lighter that neutralinos tend to be heavier than 700 GeV (gray points). Hence this signal will be suppressed because the cross-section for direct production falls down steeply for increasing slepton mass and is beyond LHC reach. However we point out that this could be a very interesting signature when associated with production of colored particles decaying to neutralino and the neutralino consequently into slepton right. configuration for the SUSY mass spectrum is essentially MSSM like, the only exception being that the LSP is now theν 1 . In this case we do not expect signatures that differ significantly from the MSSM predictions. If the decay chain ends up with a neutralino, the processχ 0 1 →ν 1 ν τ will be completely invisible, hence the nature of the dark matter can not be determined. Another possibility to disentangle the MSSM and MSSM+RN scenarios is given by the chargino decay. There are scenarios in which theχ 0 1 andχ + are degenerate, due to their composition, and the splitting in mass can be smaller than few %, such that charginos can be long-lived particles, e.g. in anomaly mediated supersymmetric breaking scenarios [80]. On the other hand, when the sneutrino is the dark matter, it is enough for relic density constraint to have charginos within 10% in mass degenerate with the sneutrino, so typically the charginos will not be long-lived. 
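Both variables are straightforward to evaluate numerically. The sketch below computes m_T2 by brute-force minimization over the splitting of the missing transverse momentum between the two invisible particles (dedicated m_T2 libraries would normally be used), and combines the dilepton invariant mass, dilepton p_T and E_T^miss into an effective transverse energy; the exact combination entering E_T^eff is assumed here for illustration and the kinematics are toy values.

```python
import numpy as np
from scipy.optimize import minimize

def m_T(p_vis, q_inv, m_vis=0.0, m_inv=0.0):
    """Transverse mass of one visible + one invisible transverse-momentum vector."""
    eT_vis = np.sqrt(m_vis**2 + p_vis @ p_vis)
    eT_inv = np.sqrt(m_inv**2 + q_inv @ q_inv)
    return np.sqrt(max(0.0, m_vis**2 + m_inv**2 + 2.0 * (eT_vis * eT_inv - p_vis @ q_inv)))

def m_T2(p1, p2, met, m_inv=0.0):
    """Brute-force m_T2: minimise max(m_T^(1), m_T^(2)) over all splittings of E_T^miss.

    p1, p2, met : 2D transverse-momentum vectors of the two leptons and the missing pT
    m_inv       : test mass assigned to each invisible particle (0 by default)
    Dedicated m_T2 codes are faster and numerically safer than this sketch.
    """
    def worst_mT(q1):
        q1 = np.asarray(q1)
        return max(m_T(p1, q1, 0.0, m_inv), m_T(p2, met - q1, 0.0, m_inv))
    return min(minimize(worst_mT, x0, method="Nelder-Mead").fun
               for x0 in (0.25 * met, 0.5 * met, 0.75 * met))

def eff_transverse_energy(m_ll, pt_ll, met_mag):
    """Effective transverse energy; the combination (sum of the dilepton invariant
    mass, dilepton pT and E_T^miss) is assumed here for illustration."""
    return m_ll + pt_ll + met_mag

l1 = np.array([220.0, 40.0])     # toy transverse momenta in GeV
l2 = np.array([-150.0, 90.0])
met = np.array([-60.0, -140.0])
print(f"m_T2 ~ {m_T2(l1, l2, met):.1f} GeV")
```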
Conclusions In this paper we have investigated distinctive leptonic signatures at colliders of the simplest extension of the MSSM in which neutrinos have Dirac masses (MSSM+RN), motivated by the fact that neutrino are massive. The connection with neutrino masses has a significant impact on the nature of the LSP and dark matter candidate. With the addition of righthanded neutrino superfield, the phenomenology of the scalar neutrino is modified as well: the left-handed component and right-handed component can substantially mix because of large trilinear scalar coupling Aν (which are not related to the small neutrino Yukawa coupling) and becomes the dark matter candidate. Assuming the SUSY parameters unified at GUT scale, we revise the status of sneutrino dark matter, finding that it is a viable candidate for masses above the Higgs pole, and that a large portion of the parameter space is compatible with the exclusion limit of LUX and can be probed by the future direct detection experiment XENON1T. We have found that there is a correlation between the annihilation channels that fix the LSP relic density and the signatures at LHC. In some regions, as the Higgs pole or when bino-neutralino and sneutrino are degenerate, the sleptons might be lighter than the electroweak fermions, leading to interesting features. The most promising signatures of sneutrino dark matter are (i) decay into two leptons of opposite sign but uncorrelated flavor and (ii) three uncorrelated leptons, which have a negligible standard model background. A higher number of leptons (∼ 4l) in the final state is also expected, even though such signature is suppressed by the long decay JHEP04(2014)100 chain. Simulated Monte Carlo events for both signals and backgrounds have been used to assess the experimental sensitivity to specific benchmark points, representative of generic configurations arising in the MSSM+RN. We have pointed out that the signal is in the reach of LHC at 14 TeV of center of mass energy and 100/ fb of luminosity. Interestingly, some configurations of the MSSM+RN parameter space lead to long-lived staus, which can be detected by next LHC run and can as well provide a hint on the nature of the dark matter, if ever detected. Indeed the life-time of theτ − 1 in the MSSM and MSSM+RN has a different behavior as a function of the mass splitting with the LSP. The anomalous production of events with 4 leptons recently observed by the CMS collaboration [86] has started to increase the interest towards multi-lepton signatures [87]. In this paper we have proposed interesting leptonic signatures from a motivated extension of the MSSM, which can be probed by future LHC run with the appropriate search strategies and by future astroparticle experiments in a complementary way. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
11,547.2
2014-04-01T00:00:00.000
[ "Physics" ]
Ultra-Broadband Absorption from 750.0 nm to 5351.6 nm in a Novel Grating Based on SiO2-Fe-Sandwich Substrate In this paper, we design a three-part-period grating based on alternating Fe/SiO2 sandwich structure, which can achieve an ultra-broadband absorption from 750.0 nm to 5351.6 nm. In particular, the absorbing efficiency can reach to more than 95% within 2158.8 nm, which is due to the well impedance matching of Fe with the free space, as well as due to the excitation of localized surface plasmon resonance and surface propagation plasmon resonance in the proposed structure. Furthermore, multiple period gratings are also discussed to broaden the absorption band. These results are very promising for applications in high-performance photovoltaics, nonlinear optics devices and protective equipment for laser weapons. Introduction As a fundamental optical process, absorption for electromagnetic wave plays a significantly important role in a wide variety of applications such as photovoltaics [1][2][3], surface enhanced Raman spectroscopy [4], plasmonic sensors [5][6][7], nonlinear optics and spectral filters [8,9]. In the past decade, it has been shown both theoretically and experimentally that absorbers based on plasmonic nanostructures and metamaterials can achieve spectrally selective absorption bands from the visible to the microwave band with a near-perfect efficiency resulting from the resonances, which also show merits such as low polarization and angle dependencies [10][11][12][13][14]. However, even if unity absorption can be realized at the resonance frequencies by using specific building blocks of certain geometries, actual structures still suffer from the limitation of absorbing bandwidth due to the narrow band resonance. A common solution for broadening the absorption band is via designing complex or multilayer structures that aim to overlap multiple resonances at the neighboring frequencies [15][16][17], and expect to result in complicated crafts and increased manufacturing costs. In this case, it is still a challenge to achieve an effective absorber with ultra-broadband absorption as well as simple manufacture. In this letter, we have designed and numerically investigated a novel absorber with an ultra-broadband absorption from 750.0 nm to 5351.6 nm. It is realized by a three-part-period grating based on the SiO 2 -Fe-sandwich substrate (SFSS). Compared with the most commonly used metals such as Au, Fe can be much more beneficial for achieving the impedance matching with free space over a broad frequency band [18]. Meanwhile, the number of Fe layers, as well as thicknesses of Fe and SiO 2 layers, are discussed to enlarge the multiple propagation surface plasmon (PSP) resonances in adjacent SiO 2 layers and then improve the absorbing capacity in visible and near-infrared regions. Furthermore, the filling factors of three-part-period grating are optimized to enhance the localized surface plasmon (LSP) resonance and improve the absorption in mid-infrared, which is analytically demonstrated with the grating Fourier harmonics model. Thus, these properties clearly indicate our proposed structure as a perfect choice for ultra-broadband absorber in applications. Design for Ordinary Grating Based on SFSS To begin with, as schematically shown in Figure 1a,b, we discuss the absorption performance of the ordinary grating placed on a thin SiO 2 layer which is supported by the SiO 2 -metal-sandwich substrate. 
The period width Λ is chosen to ensure that the proposed structure is a sub-wavelength device for the operating wavelength. Define the filling factors of the silicon grating and the free-space gap as f a and f b , respectively, and the geometrical height of the grating as h 1 . In Figure 1a, only one metal layer, with thickness t 1 , exists in the multilayer substrate. The thicknesses of the upper SiO 2 layer and the bottom SiO 2 layer are defined as h 2 and h 3 , respectively. When the incident light illuminates the structure, the optical response of the metal surface can be easily understood by examining its dielectric function, which, as a good approximation, can be described by the Drude model [19]: ε(ω) = 1 − ω p 2 /(ω 2 + iω/τ), where ω is the angular frequency of the radiation, ω p is the plasma frequency, and τ is the relaxation time of the electrons. Below ω p , the real part of ε is negative and only evanescent waves are allowed in the metal. The penetration of light into the metal is characterized by the skin depth δ = c/(2ω√|ε|), and the transmission through a flat metal layer of thickness t 1 can be expressed as T = exp(−t 1 /δ). Thus, the value of t 1 needs to be appropriate for the aim of effectively absorbing the incident light, while the metal layer at the bottom needs to be thick enough to be regarded as a Bragg reflector. At the same time, the silicon grating can also provide the in-plane momentum required for the incident radiation (if appropriately polarized) to excite a surface plasmon polariton (SPP), resulting in strong optical absorption [20]. A normally incident transverse magnetic (TM) wave propagates along the negative y-axis with its polarization along the x-axis. Assume that the relevant geometrical dimensions in Figure 1a are Λ = 3600 nm, f a = 0.167, h 1 = 600 nm, h 2 = 200 nm, and t 1 = 1200 nm. As shown in Figure 2a, using finite-element simulations in the commercial software COMSOL V5.3, the absorption spectra of the proposed grating based on the SiO 2 -Au-sandwich substrate (SASS) and the SiO 2 -Fe-sandwich substrate (SFSS) are plotted over the wavelength range from 750 nm to 2000 nm. For SASS, the absorption peaks are sharp and discrete, and only a tiny minority of them in the visible and near-infrared surpass 90%. On the contrary, for SFSS there are three main absorption bands with an average absorption of up to 95%, located at 843.4 nm to 990.1 nm (AB 1 ), 1095.9 nm to 1377.8 nm (AB 2 ), and 1729.9 nm to 1884.5 nm (AB 3 ). To better explain the absorption performance of the grating using Fe, we also give a detailed calculation and analysis based on the impedance transformation method. Here, an effective medium theory is utilized to analyze the impedance matching condition [21]. The relation between the S parameters and the impedance Z can be expressed as Z = ±[((1 + S 11 ) 2 − S 21 2 )/((1 − S 11 ) 2 − S 21 2 )] 1/2 and e inkd = S 21 /[1 − S 11 (Z − 1)/(Z + 1)], where S 11 , S 21 , S 12 , S 22 , n, k, and d are the S parameters, the effective refractive index, the wave vector, and the thickness of the structure, respectively. The absorption rate can be regarded as A = 1 − R − T = 1 − |S 11 | 2 − |S 12 | 2 . In the proposed structure, we have S 12 = 0; as a result, to obtain a broadband absorption spectrum, the ideal condition is Z = 1, S 11 = 0, and then A = 1.
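A numerical illustration of these relations is given below, with representative (not fitted) Drude parameters: the values for Au are textbook-like, while those for Fe are purely illustrative and are only an assumption. The comparison nevertheless reproduces the qualitative point used in the following, namely that the field penetrates roughly twice as deep into Fe as into Au at near-infrared wavelengths.

```python
import numpy as np

HBAR_EVS = 6.582e-16    # eV * s
C = 2.998e8             # m / s

def drude_eps(omega, omega_p, gamma):
    """Drude permittivity eps(w) = 1 - wp^2 / (w^2 + i * w * gamma)."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * omega * gamma)

def skin_depth(omega, eps):
    """1/e field-penetration depth delta = c / (omega * Im[sqrt(eps)]);
    conventions differ by factors of 2 (field vs. intensity)."""
    return C / (omega * np.sqrt(eps).imag)

lam = 1500e-9                              # a near-infrared wavelength of interest
omega = 2.0 * np.pi * C / lam
for name, wp_eV, gamma_eV in (("Au", 9.0, 0.07), ("Fe (illustrative)", 4.1, 0.02)):
    wp, g = wp_eV / HBAR_EVS, gamma_eV / HBAR_EVS
    eps = drude_eps(omega, wp, g)
    d = skin_depth(omega, eps)
    T = np.exp(-20e-9 / d)                 # single-pass attenuation through a 20 nm film
    print(f"{name}: eps ~ {eps.real:.1f}{eps.imag:+.1f}j, "
          f"skin depth ~ {d * 1e9:.0f} nm, T(20 nm) ~ {T:.2f}")
```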
According to previous research, the skin depth of Fe is about twice as large as Au, which means that the impedance of SFSS can satisfy the impedance matching condition over a wide wavelength range, and thus it is easy to observe a higher level of absorption due to a better impedance matching between the SFSS and the free space, as shown in Figure 2a. Furthermore, to have a deeper insight into the wideband absorption mechanism, we also plot the electric field distributions of the SFSS at 912.1 nm, 1194.4 nm, and 1855.2 nm as shown in Figure 2b-d, respectively. In Figure 2b,c, the electric field focus on the upper SiO 2 layer between silicon grating and Fe layer, which means that the absorption mainly originates from the excitation of Local Surface Plamons (LSP) resonance mode in the upper SiO 2 layer. However, the electric field in Figure 2d distributes not only in the the upper SiO 2 layer but also in the bottom SiO 2 layer, thus in this case the absorption is mainly attributed by the hybridization between LSP mode and Polar Surface Plasmons (PSP) mode. In addition, we also discuss the influences of parameters t 1 and h 2 to the absorption performance of the ordinary grating supported by SFSS, respectively. As shown in Figure 2e, the bandwidths of AB 1 and AB 2 keep invariable as t 1 increases from 10 nm to 40 nm, while the bandwidth of AB 3 breaks up into two narrow bands gradually when t 1 ≥ 21 nm. Since the variation of the thickness of Fe layer in SFSS can easily tune the PSP mode in the bottom SiO 2 layer, it is verified that only AB 3 is connected with the PSP mode. Meanwhile, all three of these bands split to several parts as h 2 increases from 150 nm to 300 nm as shown in Figure 2f, and, particularly, AB 1 and AB 2 are nearly disappearing when h 2 ≥ 250 nm. Hence, all three of these bands can be efficiently modulated by the LSP mode in the upper SiO 2 layer. These relationships are well matched with the electric fields as plotted in Figure 2b-d. To further broaden the absorption band, we also calculate the absorption performance of the proposed structure with multiple Fe layers in SFSS. Since LSP mode excited by the incident light acts as the "bright" mode and PSP mode excited by the field confined in the multiple layers acts as the "dark" mode, the proposed structure can be interpreted as a series of harmonic oscillators driven by the external forces [22]. The absorption peaks mainly consist of bright and dark oscillators and the number of them can increase due to the resonance coupling between PSP of neighboring SiO 2 layers. As shown in Figure 1b, the thicknesses of three Fe layers in the proposed structure are defined as t 1 = 20 nm, t 2 = 20 nm, and t 3 = 30 nm, respectively, and the thickness of SiO 2 is defined as Design for Two-Part-Period and Three-Part-Period Gratings Based on SFSS According to the simulation of multiple SFSS above, in order to broaden the absorption band of our proposed structure to mid-infrared, the LSP resonance mode in the strips of grating need to be enhanced. Here, we resort to a different LSP configuration, in which each period is composed of two grating ridges with identical width, as shown in Figure 1c. In essence, this kind of grating enables a rich set of Fourier harmonics with concomitant emergence of additional spectral features not available for the classic period grating as shown in Figure 1a,b. 
The period and the thicknesses of the Fe and SiO2 layers of the proposed two-part-period grating are equal to the corresponding values in Figure 1b. Assume that the total width of the two grating grooves in each period is kept constant, that is, f_b + f_c = 0.4. According to rigorous coupled-wave theory [22], the grating Fourier harmonics ε_n control the amplitude of the evanescent diffraction fields and are responsible for their mutual interaction; for this geometry they take the forms given in Equation (2) [23], where n_Si and n_Air represent the refractive indices of silicon and air, respectively, and n = ±1, ±2, ..., ±N, ... . As a result, by modulating the fill factors f_b and f_c, we can vary the LSP mode in the two-part-period grating and thereby tune the bandwidth of absorption. Figure 4a displays the electric field at the resonant wavelength of 2833.9 nm for the fill factor f_b = 0.2. As expected, the enhanced electric field is mainly located in the strips of the grating, which indicates that the absorption in the mid-infrared region originates from the excitation of the LSP mode. As shown in Figure 4b, the amplitude of the electric field is distributed periodically in free space, whereas in the grating it rises to about 1.4 × 10^4 V/m and then decays rapidly inside the SFSS. In particular, when f_b varies from 0.2 to 0.16, the amplitude of the electric modal field in the grating decreases while the amplitude in the SFSS remains unchanged, which demonstrates that the LSP mode in the grating can be modulated by changing the fill factor f_b. Figure 4c shows the absorption spectra for different fill factors, which can be divided into two parts: one from 1500 nm to 2500 nm and the other from 2500 nm to 6000 nm. In the first part, the absorption is ascribed to the hybridization between the LSP resonance in the grating strips and the PSP resonances in the SiO2 layers, so increasing f_b makes the absorption amplitude vary only smoothly. Note that this does not mean that the absorption efficiency of the system decreases as f_b goes up, because the positive relationship between the electric field amplitude and f_b shown in Figure 4b holds only in the mid-infrared. Furthermore, the absorption of the incident light reaches 98.7% at about 1692.3 nm when f_b is equal to 0.04. In the second wavelength range, the relationship between absorption amplitude and fill factor becomes more distinct, since the resonance relies only on the LSP mode; in this case, the absorption reaches 94.2% at about 3473.7 nm when f_b is 0.20. Compared with the original structure in Figure 1b, modulating the fill factor of this two-part-period grating based on the SFSS still yields absorption only from 750.0 nm to 1850.5 nm, so these results do not yet satisfy our aim of realizing ultra-broadband absorption. Therefore, building on the structures proposed above, we design a further three-part-period grating with the SFSS, as shown in Figure 1d. The grating Fourier harmonics ε*_n of this kind of structure are given by Equation (3); compared with Equation (2), the coefficient in front of f_b is doubled because there are two free-space gaps in each periodic unit.
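Since the closed-form expressions of Equations (2) and (3) are not reproduced here, the following Python sketch evaluates the grating Fourier harmonics numerically for an arbitrary piecewise-constant unit cell; the silicon refractive index and the ridge layouts used below are illustrative assumptions, so the printed numbers are not the values quoted in the text.

import numpy as np

def fourier_harmonics(fill_ridges, n_si=3.48, n_air=1.0, n_max=2, samples=4096):
    """Numerically compute Fourier coefficients eps_n of a binary grating period.

    fill_ridges: list of (start, width) pairs in units of the period, giving the
    positions of the silicon ridges inside one unit cell (a hypothetical layout
    helper, not the paper's closed-form Equations (2)/(3))."""
    x = np.arange(samples) / samples
    eps = np.full(samples, n_air**2, dtype=float)
    for start, width in fill_ridges:
        mask = ((x - start) % 1.0) < width
        eps[mask] = n_si**2
    coeffs = np.fft.fft(eps) / samples   # eps_n = (1/P) * integral of eps(x) exp(-i 2 pi n x / P) dx
    return {n: coeffs[n] for n in range(n_max + 1)}

# Classic one-ridge period (Figure 1a/b style) with fill factor f_a:
print(fourier_harmonics([(0.0, 0.167)]))
# Two-part period: two identical ridges separated by gaps f_b and f_c:
fa, fb = 0.3, 0.2
print(fourier_harmonics([(0.0, fa), (fa + fb, fa)]))

The same helper can be called with one, two, or three ridges per period to compare how the first few harmonics respond to the fill factors, which is the comparison carried out analytically in Figure 5.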
To better understand the modulation of the LSP mode excited in the grating, we calculate the amplitudes of the first three Fourier harmonics of the two-part-period and three-part-period gratings as functions of the fill factors f_b and f_c, respectively. Figure 5a-c show the amplitudes of ε_0, ε_1, and ε_2 for different fill factors according to Equation (2); the shaded area is not physically meaningful because of the constraint f_b + f_c < 1. Since we impose f_b + f_c = 0.4, ε_0 remains fixed at about 2.895 (red dotted line in Figure 5a), while ε_1 and ε_2 vary appreciably. This means that the zeroth-order harmonic does not contribute to the modulation of the absorption intensity; instead, the first and second orders are responsible for the modulation of absorption. Figure 5d-f show the amplitudes of ε*_0, ε*_1, and ε*_2 for different fill factors according to Equation (3). The slope of ε*_0 is enlarged because the zeroth-order term is a linear function of the fill factors; meanwhile, the variation periods of ε*_1 and ε*_2 both narrow down, because these terms are sinusoidal functions in which the fill factors enter as phase parameters. In addition, the modulation ranges of the Fourier harmonics in the three-part-period grating, especially the first order, are much larger than those in the two-part-period grating, which indicates that, by properly varying the fill factors, a wide absorption band covering the mid-infrared could be realized in this structure. The calculated absorption performance of the three-part-period grating with the three-layer SFSS is depicted in Figure 6. In Figure 6a, we fix f_c at 0.2 and tune f_a from 0.08 to 0.24 (the corresponding value of f_b decreases from 0.28 to 0.22); the absorption of incident light in this system is then reduced as a whole. Note that the filling factors here satisfy 3f_a + 2f_b + f_c = 1; the corresponding curve is plotted as the white dotted line in Figure 5d, so the modulation of absorption in Figure 6a depends mainly on the zeroth Fourier harmonic ε*_0 and the second Fourier harmonic ε*_2. In addition, from the simulation results we also plot the variation of the upper limit of the absorption band, as shown in Figure 6c, which grows slowly from 1941.2 nm to 2106.4 nm and then quickly drops back to 1571.4 nm. Furthermore, as shown in Figure 6b, the absorption can be enhanced considerably if we instead fix f_a at 0.08 and tune f_b from 0.200 to 0.067. Figure 6d shows that the upper limit of the absorption band in our proposed structure can then be extended to 5351.6 nm when f_b = 0.067, which corresponds to effective absorption of incident light in the mid-infrared. The corresponding relation between f_b and f_c is plotted as the pink dotted line (2f_b + f_c = 0.76) in Figure 5d. The amplitude of ε*_0 remains constant, while the amplitudes of ε*_1 and ε*_2 both decrease as f_b decreases. This means that, with an appropriate design of the three-part grating, the leaky guided modes can be excited solely through the first and second evanescent diffracted orders of the grating.
Finally, the ultra-wide absorption band of our optimized structure, from 750.0 nm to 5351.6 nm, is plotted in Figure 6e; in particular, in the near-infrared region up to 2158.8 nm, the absorption remains above 95%, a much better absorbing capacity than that of the original grating devices. Figure 6f,g show the electric field distributions at wavelengths of 758.4 nm and 5003.2 nm, respectively. As in the two-part-period grating, the system responds with a strong LSP resonance over the whole band, while the PSP resonances in the multiple SiO2/Fe layers contribute to the absorption only in the near-infrared.

Conclusions

In summary, we propose and numerically investigate a novel ultra-broadband absorber composed of a three-part-period grating coupled to multiple Fe/SiO2 layers. Owing to the good impedance matching of Fe to free space, the PSP mode can be greatly enhanced by replacing Au with Fe and by increasing the number of Fe layers; the influence of the thicknesses of the Fe and upper SiO2 layers on the resonances is also analyzed. On the other hand, it is verified that the LSP mode in the grating is closely connected with the grating Fourier harmonics; novel two-part-period and three-part-period gratings supported by the SFSS are therefore designed and their filling factors optimized to enhance the LSP resonance mode. In the end, ultra-broadband absorption from 750.0 nm to 5351.6 nm is realized in our proposed absorber. The proposed structure thus combines ultra-broadband operation, low cost, and fabrication simplicity, and has a variety of potential applications in thermal emitters, optical protective devices, and solar energy harvesting. Conflicts of Interest: The authors declare no conflict of interest.
4,477.2
2019-06-01T00:00:00.000
[ "Physics" ]
Microfluidic Devices for Terahertz Spectroscopy of Live Cells Toward Lab-on-a-Chip Applications THz spectroscopy is an emerging technique for studying the dynamics and interactions of cells and biomolecules, but many practical challenges still remain in experimental studies. We present a prototype of simple and inexpensive cell-trapping microfluidic chip for THz spectroscopic study of live cells. Cells are transported, trapped and concentrated into the THz exposure region by applying an AC bias signal while the chip maintains a steady temperature at 37 °C by resistive heating. We conduct some preliminary experiments on E. coli and T-cell solution and compare the transmission spectra of empty channels, channels filled with aqueous media only, and channels filled with aqueous media with un-concentrated and concentrated cells. Introduction In recent years, biological and medical applications of THz technologies have developed rapidly, e.g., in cancer diagnosis, body imaging, and biological spectroscopy [1,2]. Studies show that the low-frequency collective vibrational modes of many large biomolecules (e.g., proteins and DNAs) and biological cells have a time scale on the order of picoseconds, which corresponds to the THz frequency range, i.e., 0.1 THz to 10 THz [3][4][5][6][7][8][9]. Therefore, THz spectroscopy may become a powerful label-free and non-invasive tool for studying the structure and behavior of a wide range of biological systems from molecular to organism levels. However, the experimental study of live cells in aqueous media is still a challenge mainly due to the large absorption of water, the lack of proper THz sources, and the sample preparation difficulties. The aqueous environment in which cells live has an important effect on both THz absorption and biological function. However, biological samples in earlier studies of THz spectroscopy are usually dehydrated due to the considerable absorption of water at THz frequencies [3,4]. Such dehydrated samples would lead to low vibrational mode intensity and poor reproducibility. Most works have underscored the importance of developing novel microfluidic sample holders to improve the measurement reproducibility and efficiency [5][6][7][8][9][10][11][12][13]. Microfluidics is a powerful technique used in the analysis of biological particles within an extremely small volume of liquid. A microscale channel can avoid excessive water absorption by the aqueous environment, thus enabling the spectroscopy measurement of live cells in aqueous media at THz frequencies. Microfluidics can also manipulate bioparticles for in situ sample preparation [14][15][16][17][18]. Biological samples, such as DNA, large proteins and individual cells, can be concentrated or trapped within a small region of the channel. Microfluidic devices have been applied in liquid characterization and bio-sensing at microwave frequencies [19][20][21][22]. The dielectric properties of liquid mixtures and cell solutions are characterized by measuring either the shift of resonant frequency or the waveguide impedance. In addition to microwave spectrum, THz band is very intriguing because of the existence of low-frequency vibrational modes from biomolecule-solvent dynamics and interactions, which play an important role in biological functions [3][4][5][6][7][8][9]. Some efforts have been reported on using microfluidic devices at THz frequencies [10][11][12][13], however, mostly for sensing liquid mixtures [11][12][13]. George, et al. 
proposed a PDMS-Zeonor microfluidic device and measured THz absorption spectra of bovine serum albumin [10]. In this study, we establish a simple and cost-effective microfluidic chip which can provide efficient and reliable measurement of THz spectra of live cell samples. We observe that different cells are concentrated at different locations close to the electrodes. The underlying trapping mechanism is explained by positive or negative dielectrophoretic force [15]. We show that the temperature of the channel can be controlled and maintained at 37 °C by resistive heating. Two sets of THz systems are used for the spectral measurement, including a time-domain pulsed system and a frequency-domain continuous-wave system. Preliminary experiments on E. coli bacteria and T-cell solutions are conducted and the results are reported. This paper is organized as follows: Section 2 presents the chip configuration and performance, including the cell-concentration capability and the temperature distribution. Section 3 discusses the experimental setup and test procedure for the THz spectroscopic measurement. Section 4 presents the experimental results of E. coli and T-cell solutions and discusses potential future directions. Section 5 summarizes the paper.

Chip Design and Fabrication

A schematic view of the microfluidic chip is shown in Figure 1a,d. The device consists of a glass substrate, a pair of parallel gold electrodes for dielectrophoretic cell manipulation, and a bonded PDMS microchannel. The pair of electrodes is deposited onto the quartz substrate by lift-off. The PDMS channel is then fabricated by soft lithography. PDMS is the most common elastomeric material used in microfluidic devices for biological applications because of its tunable surface properties, transparency to visible light, non-toxicity to cells, and ease of fabrication. The dielectric properties of PDMS were first characterized at THz frequencies: the relative dielectric constant is about 2.5 and the loss tangent is about 0.05 around 0.3 THz. The major concern in the design of microfluidic devices is minimizing the absorption and multi-reflection loss from the device in the THz spectral measurement. The channel thickness is a key parameter for reducing the loss of the aqueous media, and the channel width is 5 mm. In this study the channel thickness is 300 µm, which is small enough to reduce absorption loss and large enough for cells to pass. The thickness of the PDMS layer should be as thin as possible while maintaining its mechanical strength; in this study, the PDMS layer thickness is reduced to 1.44 mm. The width of each electrode is 100 µm, and the gap distance between the electrodes is 50 µm.
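Using the PDMS parameters quoted above (relative permittivity about 2.5 and loss tangent about 0.05 near 0.3 THz), the single-pass absorption loss of the 1.44 mm PDMS lid can be estimated with the usual low-loss dielectric approximation; the sketch below neglects Fresnel reflections and multi-reflections inside the chip, so it only gives an order-of-magnitude figure rather than the measured device loss.

import numpy as np

# Rough single-pass absorption loss through the 1.44 mm PDMS layer near 0.3 THz.
c = 3e8
f = 0.3e12                      # frequency (Hz)
eps_r, tan_d = 2.5, 0.05        # quoted PDMS properties
d = 1.44e-3                     # PDMS thickness (m)

k0 = 2 * np.pi * f / c
alpha_field = 0.5 * k0 * np.sqrt(eps_r) * tan_d   # field attenuation constant (1/m), low-loss approx.
loss_db = 8.686 * alpha_field * d                 # power loss in dB over thickness d
print(f"Estimated PDMS absorption loss at 0.3 THz: {loss_db:.1f} dB")

This estimate (a few dB) is consistent in order of magnitude with the total empty-chip losses reported later, which also include reflection and scattering contributions.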
Cell Trapping

DEP arises from the interaction between a non-uniform electric field and the induced dipole of a polarizable object (Figure 1b). The dielectrophoretic force can transport the object toward the high or low electric field region depending on the effective polarization contrast between the object and the medium. If the object has the higher polarizability, the force pushes it toward the region of high electric field strength (positive DEP); otherwise, the force points toward the region of low electric field strength (negative DEP). The electric potential distribution of the chip is shown in Figure 1c. DEP has been demonstrated to effectively manipulate various types of biomolecules, particles, and cells. The time-averaged dielectrophoretic force on a spherical object is given by the standard expression [17]

F_DEP = 2π ε_m R³ Re[K(ω)] ∇|E_rms|²,

where R is the particle radius, ε_m is the permittivity of the medium, E_rms is the root-mean-square electric field, ω is the angular frequency, and K(ω) is the Clausius-Mossotti factor, which describes the frequency dependence of the effective polarizability of the particle in the medium. The Clausius-Mossotti factor is defined by

K(ω) = (ε*_p − ε*_m) / (ε*_p + 2ε*_m),

where ε*_p and ε*_m are the complex permittivities of the particle and the medium, respectively. For a homogeneous material, the complex permittivity is given by

ε* = ε − jσ/ω,

where ε is the permittivity and σ is the conductivity of the particle or medium. In the experiment, the cell solution was pipetted into the microfluidic channel. The cells were initially distributed evenly in the microchannel before a voltage was applied to the parallel electrodes. A 1 MHz square-wave AC signal with a peak-to-peak voltage (V_pp) of up to 7 V was then applied across the electrodes. In our observation, cells moved toward the center of the channel and aggregated near the electrodes within a few minutes (Figure 1a). Figure 2 shows bright-field images of low-density E. coli bacteria in Luria-Bertani (LB) medium and T-cells in RPMI medium (4.7 × 10³ cells/mL). The E. coli bacteria were concentrated inside the parallel electrodes by positive DEP. In contrast, T-cells were pushed away from the electrodes by negative DEP. This behavior potentially allows cell separation and selective detection of cells in different regions of the channel. The separation is primarily due to the intrinsic difference in the dielectrophoretic responses of the cells, which have different Clausius-Mossotti factors [18].
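The sign of Re[K(ω)] determines whether a particle experiences positive or negative DEP. The short Python sketch below evaluates the Clausius-Mossotti factor at the 1 MHz drive frequency; the particle and medium properties are placeholder values chosen only to illustrate the sign change, not values fitted to E. coli, T-cells, or the actual culture media.

import numpy as np

def cm_factor(f_hz, eps_p, sigma_p, eps_m, sigma_m):
    """Clausius-Mossotti factor K(w) = (eps*_p - eps*_m) / (eps*_p + 2*eps*_m),
    with complex permittivity eps* = eps - j*sigma/w."""
    w = 2 * np.pi * f_hz
    eps_p_c = eps_p - 1j * sigma_p / w
    eps_m_c = eps_m - 1j * sigma_m / w
    return (eps_p_c - eps_m_c) / (eps_p_c + 2 * eps_m_c)

eps0 = 8.854e-12
f = 1e6  # 1 MHz drive frequency, as used for the trapping signal
# Placeholder dielectric properties (not measured values for the cells or media):
K = cm_factor(f, eps_p=60 * eps0, sigma_p=0.5, eps_m=78 * eps0, sigma_m=0.1)
print("Re[K] =", K.real, "->", "positive DEP" if K.real > 0 else "negative DEP")

At low MHz frequencies the conductivities dominate the complex permittivities, so a particle more conductive than its medium is pulled toward the electrode gap, while a less polarizable particle is pushed away, matching the qualitative behavior observed for the two cell types.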
Thermal Distribution

The temperature of the cell solution is another important consideration in biological experiments. Either too high or too low a temperature would lead to inactivation of the cells or biomolecular analytes. This is a problem especially for long-term experiments, e.g., studies of THz-induced biological effects and spectroscopy measurements that need large numbers of sample averages. Taking the assessment of DNA damage as an example, the operation time can last from 1 to 24 hours depending on the requirements [2]. An incubator is normally used to maintain a constant temperature. For our microfluidic chip, the AC bias generates the dielectrophoretic force inside the channel and also leads to a Joule-heating-induced temperature increase. Fortunately, the temperature of the cell solution can be controlled steadily by applying an appropriate bias voltage to a solution of known conductivity. Figure 3 presents infrared thermometry results for the microfluidic device filled with the cell solution under different bias conditions. The device shows a rather uniform temperature distribution near 35 °C at 7 V, which is close to the physiological temperature. For a 10 V bias, the temperature in the region of concentration can rise to over 50 °C. Figure 4 plots the maximum temperature in the channel as a function of the peak-to-peak voltage; a curve fit is shown in the inset of Figure 4. The maximum temperature near the electrodes increases as a quadratic function of the bias voltage, as expected.
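The quadratic dependence follows from Joule heating, which scales with the square of the applied voltage. The sketch below only illustrates the fitting procedure behind the inset of Figure 4; the voltage and temperature arrays are placeholders and should be replaced by the values actually measured for Figure 4.

import numpy as np

# Placeholder data illustrating the fitting procedure (not measured values).
vpp = np.array([3.0, 5.0, 7.0, 9.0, 10.0])        # peak-to-peak bias (V)
t_max = np.array([26.0, 30.0, 35.0, 45.0, 52.0])  # max channel temperature (deg C)

# Joule heating scales with V^2, so fit T_max = a * Vpp^2 + b
a, b = np.polyfit(vpp**2, t_max, 1)
print(f"T_max ~ {a:.3f} * Vpp^2 + {b:.1f}")
print("Predicted T_max at 7 V:", a * 7.0**2 + b)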
THz Spectroscopy of the Live Cell Microfluidic Chip

Two different THz measurement systems are used in this work, i.e., a pulsed TDS system and a frequency-domain CW amplified-multiplier-chain system. In the TDS system, THz pulses are emitted and detected using near-infrared femtosecond laser pulses in a coherent, time-gated scheme [23]. It has the advantages of high temporal resolution (broad bandwidth), fast response, and high SNR, and it is very efficient for spectral measurements because signals across a broad bandwidth are acquired simultaneously. However, the TDS system usually has a limited spectral resolution (typically ~10 GHz) because of the trade-off between the spectral resolution Δf and the temporal measurement window T (Δf = 1/T); for example, a 100 ps window corresponds to a 10 GHz resolution. The maximum duration of the temporal window is limited by the repetition rate of the laser source, the length of the scanning delay, and, fundamentally, the noise level in the system [24]. The output power of the TDS system is also limited to the µW level. The frequency-domain CW system based on amplified multipliers offers higher spectral resolution and larger transmitted power than the TDS system. However, the CW system takes much longer for a spectroscopy measurement, and its bandwidth is usually limited for a given system setup.
In this work, we applied both THz measurement systems. A TDS system (T-Ray 2000 from Picometrix, Ann Arbor, MI, USA) was first used for measuring the E. coli samples; its spectral resolution is 10 GHz. Second, an amplified-multiplier-chain-based CW system (VDI-AMC-S156 from Virginia Diodes, Inc., Charlottesville, VA, USA) was used for the measurement of the T-cell samples; its spectral resolution is 1 GHz, its bandwidth is from 0.14 to 0.22 THz, and its output power is about 0.5 to 3.5 mW. Figure 5 illustrates the details of the experimental setup. A perpendicular optical path is configured so that the microfluidic chip can be placed horizontally. The beam width of the THz wave is about 1 cm. A sample holder with a small aperture is used to secure the chip; the aperture is 4 mm in diameter so that the THz radiation is confined within the channel.

Measurement Procedure

The measurement procedure is as follows: (1) measure the transmission spectra of the aperture as a reference; (2) measure the spectra of an empty chip on the holder; (3) pipette the LB medium for the E. coli measurement, or the RPMI medium for the T-cell measurement (without any cells), into the chip and measure the spectra; (4) inject the cell solution and measure the spectra without the AC bias applied; (5) turn on the AC bias and wait until the cells are concentrated as shown in Figure 2; (6) measure the spectra of the cell-trapping case. For each experiment, we repeat the same procedure with an independent setup three times to reduce manual alignment errors and system uncertainties.
The averaged measurement results for the above sequential experiments are presented as: (a) the aperture only (aperture), (b) the empty chip with the aperture (empty chip), (c) the chip filled with medium without any cells (medium), (d) the chip filled with cell solution without biasing (V_off), and (e) the chip filled with concentrated cell solution (V_on).

E. coli Solution

Figure 6 shows the measured time-domain signals of E. coli in LB medium with the TDS system. Figure 7 plots the corresponding spectra obtained by fast Fourier transform (FFT) with a 30 ps truncation of the temporal window to reduce the influence of multi-reflections. The noise level is about −30 dB, measured by blocking the detector and turning off the THz source.
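A minimal sketch of this post-processing step is given below: the time traces are truncated to the 30 ps window, Fourier transformed, and referenced to the aperture measurement to obtain the transmission in dB; the array names are hypothetical and stand in for the recorded TDS waveforms.

import numpy as np

def transmission_spectrum_db(t_ps, sample_trace, reference_trace, window_ps=30.0):
    """Truncate TDS time traces to `window_ps`, FFT them, and return the
    transmission of the sample relative to the reference in dB.
    `t_ps`, `sample_trace`, and `reference_trace` are hypothetical input arrays."""
    keep = t_ps <= (t_ps[0] + window_ps)                 # 30 ps truncation window
    dt = (t_ps[1] - t_ps[0]) * 1e-12                     # sampling interval (s)
    freqs = np.fft.rfftfreq(keep.sum(), d=dt) * 1e-12    # frequency axis in THz
    s = np.fft.rfft(sample_trace[keep])
    r = np.fft.rfft(reference_trace[keep])
    trans_db = 20 * np.log10(np.abs(s) / np.abs(r))      # amplitude ratio in dB
    return freqs, trans_db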
From Figure 7, we can see that the empty chip results in a 4-20 dB loss of transmission, mainly due to PDMS absorption, multi-reflections, and scattering from the microfluidic chip. The aqueous medium in the 300 µm channel adds a few (<10) dB of loss in the low-frequency region and 10-22 dB of loss above 0.2 THz. The transmission loss for the V_off cell solution case is consistently higher than that of the medium-only case, indicating a THz response from the un-concentrated cells. Moreover, the transmission loss for the V_on cell solution case is mostly higher than that of the V_off case, indicating higher THz absorption by the concentrated cells. In particular, we observe a 2 dB absorption increase near 0.11 THz, and similarly a 7 dB absorption peak at the same frequency in the spectral difference between the medium and V_on cases (see Figure 8). There are also some transmission peaks at higher frequencies for both curves in Figure 7, especially for the concentrated cells (V_on case); however, due to the relatively low signal-to-noise ratio, it is not clear whether these are THz resonances from the cells. Figure 9 presents the measured transmission spectra of T-cells in RPMI medium (4.7 × 10³ cells/mL) using the CW system.
The cell solution case shows about 10 dB higher THz transmission loss than the empty-channel case. Figure 10 plots the spectral difference between the V_on and V_off cases. The average transmission of the concentrated T-cell sample is still slightly lower up to 0.19 THz, before the signal-to-noise ratio becomes much worse. Compared with the E. coli bacteria case, the spectral difference between the un-concentrated and concentrated T-cell samples is much smaller. One reason is that the trapping position for T-cells is at the outer edges of the electrodes (as shown in Figure 2), where the THz field intensity is smaller than in the inner gap. In addition, the cell density of the T-cell sample is much lower than that of the E. coli sample. It might also be that the intrinsic THz properties of T-cells are close to those of the RPMI medium, although further investigation is necessary to be conclusive.

Chip Optimization Discussion

The output power of a THz source is usually limited. From Figures 7 and 9, we can see that there is still a large loss of signal due to multi-reflections, scattering, and material absorption from the microfluidic chip itself. The majority of the loss still comes from the aqueous media, which limits the discriminative capability of such a device. The presented 300 µm microfluidic channel already shows the advantage of reducing the absorption at THz frequencies; chips with 800 and 600 µm channel thicknesses were also tested, but no detectable output signals were measured because of the large absorption of the excess aqueous media. On the other hand, the channel thickness should be large enough for cells to pass. There are other ways to further improve the SNR and to extend the spectral measurement to higher frequencies, e.g., by reducing the substrate thickness or by using a low-refractive-index material for the substrate [25]. Using a dielectric lens instead of the aperture to focus the THz beam can also help increase the signal intensity. THz sources, waveguides, and field-enhancement structures can also be integrated on the chip to increase the interactions between cells and THz waves [11,26].
Conclusions

The work in this paper demonstrates an initial proof-of-concept for cell concentration, steady temperature control, and THz spectral measurement of live cells. The DEP-based cell manipulation capability has been successfully demonstrated, and the microfluidic chip provides the desired steady, controllable temperature environment. Both time-domain and frequency-domain THz spectroscopy systems are used, each with its own advantages. Our experimental results on empty channels, channels filled with aqueous media only, and channels filled with un-concentrated and concentrated cell solutions show different THz transmission responses. In general, the concentrated cell samples are more absorptive than the un-concentrated ones. An absorption peak is observed near 0.11 THz for both the un-concentrated and concentrated E. coli bacteria samples, which might indicate an absorption signature of E. coli bacteria. No absorptive signatures are observed for the T-cell case.
The ultimate goal of this work is to develop lab-on-a-chip devices at THz frequencies that integrate sample preparation, bio-particle transport and concentration, and effective THz bio-sensing and spectroscopic study. This work not only shows encouraging results but also helps identify the improvements needed, including optimization of the chip design and of the THz source. Conflicts of Interest: The authors declare no conflict of interest.
7,742
2016-04-01T00:00:00.000
[ "Physics" ]
Challenges at the APOE locus: a robust quality control approach for accurate APOE genotyping Background Genetic variants within the APOE locus may modulate Alzheimer’s disease (AD) risk independently or in conjunction with APOE*2/3/4 genotypes. Identifying such variants and mechanisms would importantly advance our understanding of APOE pathophysiology and provide critical guidance for AD therapies aimed at APOE. The APOE locus however remains relatively poorly understood in AD, owing to multiple challenges that include its complex linkage structure and uncertainty in APOE*2/3/4 genotype quality. Here, we present a novel APOE*2/3/4 filtering approach and showcase its relevance on AD risk association analyses for the rs439401 variant, which is located 1801 base pairs downstream of APOE and has been associated with a potential regulatory effect on APOE. Methods We used thirty-two AD-related cohorts, with genetic data from various high-density single-nucleotide polymorphism microarrays, whole-genome sequencing, and whole-exome sequencing. Study participants were filtered to be ages 60 and older, non-Hispanic, of European ancestry, and diagnosed as cognitively normal or AD (n = 65,701). Primary analyses investigated AD risk in APOE*4/4 carriers. Additional supporting analyses were performed in APOE*3/4 and 3/3 strata. Outcomes were compared under two different APOE*2/3/4 filtering approaches. Results Using more conventional APOE*2/3/4 filtering criteria (approach 1), we showed that, when in-phase with APOE*4, rs439401 was variably associated with protective effects on AD case-control status. However, when applying a novel filter that increases the certainty of the APOE*2/3/4 genotypes by applying more stringent criteria for concordance between the provided APOE genotype and imputed APOE genotype (approach 2), we observed that all significant effects were lost. Conclusions We showed that careful consideration of APOE genotype and appropriate sample filtering were crucial to robustly interrogate the role of the APOE locus on AD risk. Our study presents a novel APOE filtering approach and provides important guidelines for research into the APOE locus, as well as for elucidating genetic interaction effects with APOE*2/3/4. Supplementary Information The online version contains supplementary material available at 10.1186/s13195-022-00962-4. Introduction APOLIPOPROTEIN E*4 (APOE*4) is the strongest genetic risk factor for late-onset Alzheimer's disease (AD) [1]. In subjects of European ancestry, one copy of APOE*4 increases the risk of a clinical diagnosis of AD by about 3-fold and two copies increase the risk by about 12-fold [2,3]. APOE*2 on the other hand decreases the risk of AD by about half [3], while APOE*3 is the reference allele. Beyond the two common missense variants that compose APOE*2/3/4 (rs429358 and rs7412), there may be other coding variants on APOE or non-coding regulatory variants in the APOE locus that further impact AD risk, either independently or in conjunction with APOE*2/3/4 [4][5][6][7][8][9][10][11][12][13][14][15]. This pertains, by example, to a crucial question in the field: why do some APOE*4 carriers remain asymptomatic even into advanced old age? One possibility is that there may be genetic variants in the APOE locus that affect APOE*4 availability and in turn mitigate APOE*4-related risk for AD. Identifying such variants would importantly advance our understanding of APOE*4 pathophysiology and provide critical guidance for AD therapies aimed at APOE*4 [16,17]. 
Despite its therapeutic promise and three active decades of research, the APOE locus remains relatively poorly understood in AD. While there are multiple contributing reasons, one prominent reason is that the APOE locus harbors multiple nearby genes and shows a complex linkage disequilibrium (LD) structure with APOE*2/3/4, making it difficult to identify causal variants and interaction effects [18,19]. Other important reasons are that relevant risk variants may be rare, thus requiring large sample sizes, and that the quality of the APOE*2/3/4 genotype can bear heavily on correctly identifying interaction effects and causal haplotypes. The latter may be of particular relevance given the plethora of available protein-based (e.g., two-dimensional gel electrophoresis and MALDI-TOF mass spectrometry) and DNA-based methods (e.g., TaqMan assays, high-resolution melting analysis, PCR sequencing, etc.) for APOE*2/3/4 genotyping [20-24]. Importantly, these methods have variable quality and limitations related to the haplotypic nature of APOE*2/3/4. For instance, protein-based assays may suffer from biases in detecting different APOE isoforms, while DNA-based assays can be affected by rare variants in the genomic region near APOE*2/3/4 (cf. Huang et al. for a detailed review) [25]. In turn, cohorts that are commonly included in genetic association studies of AD have used variable APOE genotyping methods [26-30], which has led to variable APOE*2/3/4 genotype quality across cohorts used in meta-analyses. The approach used to quality control the APOE*2/3/4 genotype is therefore critical to ensure robust association analyses. While the need for stringent APOE quality control is not necessarily novel, to our knowledge, there is currently no specific study that clearly addresses this issue, nor are there any consensus guidelines. In this study, we present analysis approaches and related findings to guide future research in the APOE locus. Specifically, we show findings for a large-scale analysis of rs439401 and its association with AD risk. This variant, located 1801 base pairs downstream of APOE, was recently identified as a brain APOE splice quantitative trait locus (sQTL) in GTEx [31,32], spurring our interest to investigate it. We hypothesized that it may affect APOE*4-related risk for AD and observed that it is most often seen on the same chromosome copy as APOE*3 (i.e., is in-phase with APOE*3), but in rare instances was seen together with APOE*4. We thus stratified analyses according to APOE*3 and APOE*4 genotypes to evaluate whether effects depended on the variant being in-phase with APOE*4. We use analyses of this variant to illustrate how critical it is to have accurate APOE*2/3/4 genotype data. Based on initial analyses using a conventional APOE filtering approach and a subsequent robustness assessment, we designed and present a novel APOE filtering approach that we believe will be highly relevant to help guide further reproducible research in this area.

Methods

Study samples

Discovery samples included genetic data generated on high-density SNP microarrays, whole-exome sequencing (WES), and whole-genome sequencing (WGS) (Table S1). The discovery samples comprised publicly available case-control (majority), family-based, population-based, and longitudinal cohorts.
Independent replication samples, genotyped on SNP microarrays, were available from three large cohorts: the Rotterdam study, a population-based prospective study, the European Alzheimer Disease Initiative (EADI), roughly two-thirds of which is from a prospective population-based study and one third from case-control samples, and the European Alzheimer & Dementia BioBank (EADB), which collated AD case-control samples from 15 European countries. Ascertainment of genotype/phenotype data for each cohort/project are described in detail elsewhere [33, 40-44, 46, 47, 54]. Cross-sample genotype/phenotype harmonization for the discovery samples is summarized in Supplementary Methods. Phenotypes from respective cohorts were updated as of March 2021. Data were analyzed between December 2019 and June 2021. Genetic data quality control and processing Genetic data in the discovery samples underwent standard quality control (QC; Plink v1.9) and ancestry determination (SNPweights v2.1; Fig. S1) [57]. Only non-Hispanic subjects of European ancestry (representing the vast majority of samples) were selected for processing. Data were restricted to those providing coverage of the rs439401 variant. Principal component analysis of genotyped SNPs provided principal components (PCs) capturing population substructure (PC-AiR, Fig. S2) [58]. Identity-by-descent (IBD) analyses reliably identified kinship down to 3rd degree relatedness (PC-Relate, Fig. S3) [58]. Sparse genetic relationship matrices (GRM) were constructed to enable analyses including related individuals [59]. SNP array data were used to perform genotype imputation with regard to the TOPMed imputation reference panel [60,61]. Genetic processing of Rotterdam, EADI, and EADB replication samples is described elsewhere [33,54]. Detailed descriptions of all processing steps are in Supplementary Methods and Table S2. Ascertainment of rs439401 The rs439401 variant was originally included in our analyses as it had a cross-cohort genotyping rate >80% in the discovery samples. Genotypes were considered from either the direct call on the SNP array data (i.e., called from probe intensity data) or the call from WGS data. We specifically relied solely on directly genotyped data rather than using imputed data in order to obtain unbiased results. This choice was additionally motivated reasoning that putative rare haplotypes may not be accurately imputed, particularly when using the commonly younger (non-AD) individuals in imputation reference panels [60,62,63]. Genotype reliability for the variant was verified by cross-correspondence across 3804 duplicate samples in the discovery and by assessing genotype intensity data on the SNP microarray in EADB. Ascertainment of APOE genotypes Throughout, we will refer to APOE*2/3/4 genotypes as APOE genotypes. APOE genotypes were available from (1) cohort demographics (i.e., "provided" APOE), which generally had APOE genotype status determined through various direct genotyping methods (detailed elsewhere [33,54]), (2) directly from WES/WGS calls, or (3) through imputation of rs429358 (which captures the APOE*4 allele) and rs7412 (which captures the APOE*2 allele). It is relevant to note that rs429358 was never directly available on the SNP microarrays. It is further relevant to note that for the current WES data from ADSP, rs7412 was not available, with only rs429358 being reliably called in most subjects. The WES data could thus be used only to verify subjects with a provided APOE*3/3, 3/4, or 4/4 status (cf. 
Supplementary Methods).

APOE genotype filtering criteria

To our understanding, common criteria across prior studies regarding APOE genotypes can be summarized as giving priority to provided APOE genotypes when available (as direct genotyping methods are generally considered the gold standard), followed by using APOE genotypes derived from rs429358 and rs7412 when directly called on a SNP microarray, followed by inference of APOE genotypes through (high-quality) imputation of rs429358 and rs7412. There is no clear consensus on whether or how any discrepancies across available APOE genotypes for a given subject should be adjudicated. Furthermore, with the recent increasing availability of WGS/WES data in the AD field [42,46,51], these data can now also be used to verify APOE genotypes. When high-quality WGS/WES calls are available for rs429358 and rs7412 (i.e., good read depth/quality with a clear reference/alternate allele distribution) [64], the derived APOE genotype may be considered the ground truth. Recent work indeed suggests that a higher APOE genotype accuracy can be achieved using next-generation sequencing compared with conventional gold-standard methods [65].

APOE filtering approach 1

Based on the above considerations, we designed criteria to use APOE genotypes according to the highest available quality. Specifically, when multiple APOE genotypes were available for a given subject, the APOE genotype we selected followed the priority of WGS/WES over provided/demographic sources (for details regarding "provided/demographic" APOE sources, please cf. above in the section "Ascertainment of APOE genotypes"). If the APOE genotype was only available from provided/demographic sources and was discordant across duplicate samples, then those samples were flagged for exclusion (N = 73 out of 1501 (4.86%) unique subjects). Similarly, the correspondence between APOE genotypes derived from WES and WGS across duplicate samples was checked and only showed discordance in five subjects, whose APOE*2/3 and APOE*3/3 genotypes differed across the WES and WGS data (these subjects were excluded). The final set of samples used for association analyses thus did not display any mismatches in prioritized APOE genotypes across duplicates, but in some instances the APOE genotype from provided/demographic versus WES/WGS sources differed. APOE status as inferred from imputation was entirely ignored, reasoning that it was less reliable and that rare haplotypes of potential interest in the APOE locus may lead to false imputation of the APOE*2/3/4 genotype.

APOE filtering approach 2

After further assessment of the initial results, we had concerns about the reliability of APOE genotype status in some APOE*4 subjects carrying rs439401 (cf. Results). We therefore expanded the first approach to exclude any subjects who had their prioritized APOE genotype determined from provided/demographic APOE but were still discordant with their imputed APOE genotype (N = 632 out of 12,753 (4.96%) in the discovery sample after passing all other filtering steps). Note that imputation scores (R²) for rs429358 and rs7412 were never lower than 0.8. Information regarding APOE imputation, as well as several correspondence checks across different sources of APOE genotypes, is provided in the supplementary material and referenced in the "Results" section. An additional check for APOE genotype consistency was also performed using newly released sequencing data from the ADSP (NG00067.v5) [66], processed in May 2021 (cf. Supplementary Methods and the "Results" section). An overview of the study design and the two APOE filtering approaches is presented in Fig. 1.
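A minimal sketch of the two filtering approaches, assuming hypothetical per-subject columns for the WGS/WES-derived, provided, and imputed APOE genotypes, is given below; it is meant only to illustrate the prioritization and concordance logic described above, not the exact implementation used in this study.

import pandas as pd

def filter_apoe(df, approach=2):
    """Sketch of the APOE*2/3/4 filtering logic described above.
    Expects hypothetical columns 'apoe_wgs_wes', 'apoe_provided', and
    'apoe_imputed' (strings such as '3/4', or NA when unavailable)."""
    df = df.copy()
    # Prioritize WGS/WES-derived genotypes over provided/demographic genotypes.
    df["apoe_final"] = df["apoe_wgs_wes"].fillna(df["apoe_provided"])
    df = df.dropna(subset=["apoe_final"])
    if approach == 2:
        # Approach 2: when the prioritized genotype comes only from provided
        # sources, additionally require concordance with the imputed genotype.
        provided_only = df["apoe_wgs_wes"].isna()
        discordant = provided_only & df["apoe_imputed"].notna() & (
            df["apoe_provided"] != df["apoe_imputed"])
        df = df.loc[~discordant]
    return df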
Supplementary Methods and the "Results" section). An overview of the study design and APOE filtering approaches is presented in Fig. 1.

Simulations of concordance rates between observed and true APOE*4/4 genotypes

In order to understand potential uncertainty in APOE*4/4 genotypes, we simulated different type I and type II error rates for APOE*4/4 status. The type I error rate was defined as the probability, p1, of misclassifying non-APOE*4/4 carriers as APOE*4/4. The type II error rate was defined as the probability, p2, of misclassifying APOE*4/4 carriers as non-APOE*4/4. We considered a range of true frequencies, f_true, of APOE*4/4 carriers among all cases and among all controls, respectively (that is, relative to all APOE strata). This range for f_true was centered on observations in the current discovery samples, which should represent a reasonable approximation of expected frequencies in case-control samples. The observed frequency was then defined as f_obs = f_true × (1 − p2) + (1 − f_true) × p1, and the concordance rate between observed and true APOE*4/4 status as f_true × (1 − p2) / f_obs.

Statistical analyses

Primary analyses evaluated associations of rs439401 with relative risk for AD in APOE*4/4 carriers using additive genetic models. In additional supporting analyses, associations were evaluated in APOE*3/4 carriers, comparing wild-type (WT) to homozygote (HOM) genotypes, ensuring rs439401 was in-phase with APOE*4. The expectation here was to observe similar but attenuated effects compared to associations in APOE*4/4 carriers. Additional associations were evaluated in APOE*3/4 and 3/3 carriers using additive genetic models, with the expectation of observing little or no effect if associations were conditional on being in-phase with APOE*4. APOE*2/4 carriers were not considered given sample paucity. Analyses were restricted to subjects aged 60 and above, consistent with age cutoffs in prior genetic studies of AD [54]. Replication analyses focused only on evaluating variants in-phase with APOE*4. Lastly, to provide additional insight into the putative role of rs439401 in AD, we evaluated the association of rs439401 with relative risk for AD in the full discovery sample, while adjusting for APOE*2 and APOE*4 dosage. Cohorts in the discovery were combined into a single mega-analysis that included related subjects, and outcome measures were adjusted for age, sex, the first five genetic PCs, and the GRM. In full-sample analyses, models further included APOE*2 and APOE*4 dosage as covariates. In replications, models included only unrelated subjects and were not adjusted for the GRM. EADI and Rotterdam further adjusted for the first three genetic PCs, while EADB adjusted for the first 20 genetic PCs and genotyping center. Notably, models in the discovery mega-analyses did not adjust for cohort, reasoning that this may inadvertently diminish power given variable cohort sizes and carrier distributions. This is especially relevant in the case of lower-frequency variants in the APOE*4/4 stratum, where cohort bins and the number of allele observations become very small. Still, to address potential concerns regarding cohort biases, the effect of cohort adjustment in the discovery was evaluated in sensitivity analyses. Associations with AD risk were evaluated under a case-control design using linear mixed-model regression in analyses of related subjects and logistic regression in analyses of unrelated subjects. Additional details for model/inclusion criteria are in Supplementary Methods.
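To make the concordance simulation described above concrete, the following is a minimal R sketch of the calculation; the function name and the illustrative frequencies and error rates are ours and purely hypothetical, not values taken from the cohorts.

```r
# Minimal sketch of the APOE*4/4 concordance simulation described above.
# p1: type I error rate (non-APOE*4/4 carriers misclassified as APOE*4/4)
# p2: type II error rate (APOE*4/4 carriers misclassified as non-APOE*4/4)
# f_true: true APOE*4/4 frequency among all cases (or among all controls)
concordance_apoe44 <- function(f_true, p1, p2) {
  f_obs <- f_true * (1 - p2) + (1 - f_true) * p1  # observed APOE*4/4 frequency
  f_true * (1 - p2) / f_obs                       # P(truly APOE*4/4 | observed APOE*4/4)
}

# Illustrative grid over hypothetical true frequencies and 0-5% error rates
grid <- expand.grid(f_true = c(0.12, 0.01),  # e.g. cases vs. controls (hypothetical values)
                    p1 = seq(0, 0.05, by = 0.01),
                    p2 = seq(0, 0.05, by = 0.01))
grid$concordance <- with(grid, concordance_apoe44(f_true, p1, p2))
head(grid[order(grid$concordance), ])
```

As an illustration, with hypothetical true APOE*4/4 frequencies of 12% in cases and 1% in controls and p1 = p2 = 1%, the implied concordance is roughly 93% for observed APOE*4/4 cases but only about 50% for observed APOE*4/4 controls, which is the qualitative pattern reported in the Results.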
Association analyses were considered significant below a threshold P-value of 0.05. All analyses were performed in R v3.6.0.

Participant demographics and rs439401 linkage structure

Across all 142,075 genotyped samples considered in this study (Table S1), 65,701 unique participants passed filtering and inclusion criteria. Participant demographics for APOE*4/4 and 3/4 carriers are in Table 1, while detailed full-sample demographics are in Tables S3-S4. In the discovery, rs439401 displayed high LD (D′ > 0.9) with APOE*3, but in rarer instances was observed in-phase with APOE*4, thereby deviating from the expected LD structure (Table S5).

APOE filtering approach 1: rs439401 shows variable association with Alzheimer's disease risk

Primary case-control findings in APOE*4/4 carriers in the discovery showed that rs439401 displayed a strong, protective, and significant effect on case-control status (Table 2). It displayed similar protective effect sizes in the EADI and Rotterdam replication samples, but was risk-increasing in EADB, and did not reach significance in any replication sample. When in-phase with APOE*4 in APOE*3/4 (WT-HOM) stratified analyses, rs439401 showed a significant protective effect in the discovery, but variable non-significant results in the replication samples (Table 2). In contrast, in the discovery, rs439401 did not associate with AD risk in APOE*3/4 (additive model) or 3/3 stratified analyses (Table S6), nor in the full-sample analysis (odds ratio = 0.99; 95% confidence interval = [0.95, 1.03], P-value = 0.61).

Fig. 1 Schematic overview of the study design and two APOE*2/3/4 filtering approaches

Table 1 Sample demographics for association analyses with Alzheimer's disease case-control status

Because of the use of a mega-analysis design that does not adjust for cohort, there may still be concern for potential cohort biases. Therefore, as a sensitivity analysis, we re-evaluated the case-control discovery findings, now adjusting for cohort or cohort/array/center (Fig. S5). These analyses indicated diminished significance, but effect sizes remained comparable and rs439401 remained strongly significant in APOE*4/4 carriers.

Robustness assessment: limitations to APOE filtering approach 1

After the initial analyses, we assessed the robustness of the primary discovery findings. This appeared particularly relevant considering the very low frequency of rs439401 carriers in APOE*4/4 controls in EADB versus other cohorts, suggesting potential biases in the controls across the cohorts. The concordance rate of rs439401 from duplicate samples across microarrays and WGS (99.97%) supported genotype reliability (Table S7). Similarly, the variant appeared confidently called from the EADB microarray intensity data (Fig. S4). Overall, we concluded there were no specific genotyping issues for rs439401. Another important consideration is that some error rate is expected for the different direct APOE genotyping methods used across cohorts. Overall, the reliability of the APOE*4 genotype may thus be of concern, especially when considering the rare APOE*4-rs439401 haplotype. After assessing all APOE*4/4-rs439401 carriers, it was apparent that one cohort, MIRAGE, contributed a large number of APOE*4/4-rs439401 controls for which APOE status was available only from provided/demographic sources (Fig. 2A, Table S8).
We then assessed the concordance rate between provided and imputed APOE genotypes across all respective cohorts and observed that MIRAGE displayed the lowest concordance rate of all cohorts included in the discovery analyses (Fig. 2B), despite imputation scores for rs429358 and rs7412 comparable to those of the other cohorts (Table S9). Overall, this supported concern for the APOE*4/4-rs439401 controls from MIRAGE. Extending these considerations, we assessed discordance rates between imputed and provided APOE for different strata (Fig. 2C, Table S10). Importantly, while the discordance rate was only 4.3% in the full sample, it increased to 7.2% in APOE*4/4 cases, further increased to 16.1% in APOE*4/4 controls, and then drastically increased to 47.4% in APOE*4/4-rs439401 carrier cases and 85.7% in APOE*4/4-rs439401 carrier controls. While our a priori assumption for approach 1 reasoned that imputed APOE may be discordant with provided APOE for subjects with rare haplotypes (e.g., APOE*4/4-rs439401 carriers), the roughly 2-fold higher discordance in controls compared to cases would not be expected. Rather, it more likely indicates that the APOE genotype was miscalled in at least some of these individuals. To better understand these observations, we performed simulation studies using different type I and type II error rates (0-5%) for APOE*4/4 genotyping and observed that APOE*4/4 controls were more likely than APOE*4/4 cases to not actually be APOE*4/4 carriers (Figs. S6-S7). This was the result of the low frequency of APOE*4/4 controls and the strong case-control imbalance in APOE*4/4 carriers. Overall, this supported concern for the validity of approach 1. We then used the recently released new ADSP WGS and WES data, which cover additional subjects that are duplicates of SNP-array samples included in our discovery analyses (N = 3644 as determined by identity-by-descent). We assessed the APOE genotype calls from the new WES/WGS data and observed that three APOE*4/4-rs439401 control subjects (not from the MIRAGE cohort) in the prior discovery samples were in fact APOE*3/3 or APOE*3/4 carriers, which was also the imputed APOE genotype (Table S8). Overall, this again raised concern about the validity of approach 1. In sum, these additional checks for robustness of the findings suggested problems with APOE genotype reliability in subjects with APOE*4-rs439401 haplotypes and in APOE*4/4 carriers overall, indicating a limitation of the first (conventional) APOE filtering approach. In a final check, we observed that, despite good concordance between provided and WGS APOE (99.1%), imputed and WGS APOE were more concordant (97.2%) than imputed and provided APOE (95.7%), indicating that in at least some subjects imputed APOE was likely more correct than provided APOE (Table S10).

APOE filtering approach 2: rs439401 shows no association with Alzheimer's disease risk

In light of the identified APOE reliability limitations, we extended approach 1 to filter out any subjects who did not have WGS/WES APOE and were at the same time discordant for provided and (high-quality) imputed APOE. We also filtered out any APOE calls discordant with the new ADSP WES/WGS data since this information was available (in the case of APOE*4-rs439401 carriers, this overlapped with samples where provided and imputed APOE were discordant). We then applied this to the discovery samples and reran analyses.
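For illustration, a minimal R sketch of the combined filtering logic (the approach 1 prioritization plus the approach 2 exclusion) is given below; the data-frame layout and column names (apoe_wgs_wes, apoe_provided, apoe_imputed, imput_r2) are hypothetical and only meant to mirror the verbal description above.

```r
# Sketch of APOE genotype prioritization (approach 1) and the additional
# approach 2 filter, assuming one row per subject with hypothetical columns:
#   apoe_wgs_wes  - APOE genotype from WGS/WES calls (NA if unavailable)
#   apoe_provided - provided/demographic APOE genotype (NA if unavailable)
#   apoe_imputed  - APOE genotype inferred from imputed rs429358/rs7412
#   imput_r2      - minimum imputation R^2 of rs429358 and rs7412
apply_apoe_filter <- function(d, approach = 1) {
  # Approach 1: prioritize WGS/WES over provided/demographic APOE;
  # imputed APOE is never used to assign the genotype itself.
  d$apoe_final  <- ifelse(!is.na(d$apoe_wgs_wes), d$apoe_wgs_wes, d$apoe_provided)
  d$apoe_source <- ifelse(!is.na(d$apoe_wgs_wes), "WGS/WES", "provided")
  if (approach == 2) {
    # Approach 2: additionally exclude subjects without WGS/WES APOE whose
    # provided genotype is discordant with high-quality imputed APOE.
    discordant <- d$apoe_source == "provided" &
      !is.na(d$apoe_imputed) & d$imput_r2 > 0.8 &
      d$apoe_final != d$apoe_imputed
    discordant[is.na(discordant)] <- FALSE  # subjects with missing fields are not excluded here
    d <- d[!discordant, ]
  }
  d
}
```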
Exclusion of subjects with APOE status discordant with the newly released ADSP WES/WGS data removed 61 (out of 12,367) subjects from the SNP-array samples. Further applying the new APOE filter excluded 632 (out of 12,753 considered) subjects from the discovery SNP-array samples. APOE*4-rs439401 carrier frequencies dropped substantially, particularly in controls, and became more consistent with those observed in the Haplotype Reference Consortium (Fig. 3A). Case-control association analyses now indicated no effects for APOE*4-rs439401 carriers (Table 3 and Fig. 3B, C) and still no effect in full-sample analyses (odds ratio = 1.00; 95% confidence interval = [0.96, 1.05], P-value = 0.93). In sum, approach 2 produced results that were more realistic in terms of expected linkage structure and more consistent with the lack of significant replication findings.

Discussion

Our results demonstrate that the filtering criteria for APOE*2/3/4 genotypes can heavily impact association findings for variants that exert their effect in conjunction with APOE*2/3/4. Specifically, we used the APOE sQTL variant rs439401 to illustrate this point. Using more conventional filtering criteria regarding APOE genotypes (approach 1), we showed that, when in-phase with APOE*4, rs439401 was variably associated with protective effects on AD case-control status. However, when assessing the reliability of APOE*2/3/4 genotypes with more scrutiny and applying a novel filter to increase certainty of the APOE genotypes (approach 2), we observed that all significant effects were lost. The findings and methodology presented here are thus of high relevance to guide future research into the APOE locus. Specifically, we propose that our approach 2 can serve as a consensus APOE genotyping approach for future studies, namely: first, to prioritize WGS/WES APOE*2/3/4 genotypes if available (and, if only either rs429358 or rs7412 is available from WGS/WES, to use those genotype data to verify the provided/demographic APOE*2/3/4 genotypes); second, to use provided/demographic APOE*2/3/4 genotypes; and third, in subjects without WGS/WES information, to exclude those for whom the provided/demographic and imputed (R² > 0.8) APOE*2/3/4 genotypes are discordant. Another important step to ensure the highest quality of APOE*2/3/4 genotypes is to verify and harmonize this information across available duplicate samples.

Fig. 2 A (additional data in Table S8): the red arrow indicates that a large fraction of control rs439401 carriers was contributed by MIRAGE. B Concordance rates between provided and imputed APOE per cohort (additional data in Table S9): the red arrow indicates that MIRAGE had the lowest concordance rate, suggesting potential limitations with its provided APOE data that could explain the observations in A. C Concordance rates between provided and imputed APOE for the discovery sample, considering multiple strata (additional data in Table S10). APOE*4/4 strata considered provided APOE*4/4 genotypes after applying APOE filtering approach 1. Note the decreased concordance in APOE*4/4 controls compared to cases and the strongly decreased concordance for rs439401 carriers, specifically controls. Simulations confirmed that APOE*4/4 controls are more likely than cases to not actually be APOE*4/4 carriers (cf. Figs. S6-S7). Abbreviations: CN, cognitively normal; AD, Alzheimer's disease; OR, odds ratio

The rs439401 variant considered in the current study has previously been investigated with regard to AD risk in different contexts and using variable strategies and study designs [8-11, 13].
Our analyses, however, considered a substantially larger sample size, essentially incorporating most European-ancestry AD cohorts included in prior studies, specifically focused on evaluating effects stratified to the respective APOE genotypes, and tested only directly genotyped variants. Further, up-to-date genotype and phenotype data for a large set of AD cohorts were jointly harmonized to compose a parsimonious discovery sample. Non-European ancestries were not investigated here owing to the paucity of publicly available data. When compared to similar prior studies [6, 13-15], our discovery group was larger and we incorporated three large replication cohorts. Furthermore, through the implementation of linear mixed modeling and cross-sample harmonization, we were able to increase the power and specificity for variant discovery, while additionally verifying genotype reliability across nearly 4000 duplicate samples. In sum, our analyses should provide a robust assessment of the presented APOE filtering approaches and of rs439401's association with AD risk.

Fig. 3 Overview of rs439401 frequencies and case-control association findings, comparing APOE filtering approach 1 to approach 2. A Carrier frequencies across both approaches for the APOE*4/4 and APOE*3/4 WT vs HOM groups, as well as in the Haplotype Reference Consortium v1.1 (HRC). Note the decreased frequencies for rs439401 in approach 2, which appear concordant with the HRC. B, C Overview of association findings for all evaluated strata, comparing B approach 1 to C approach 2. Significant effects are denoted by an asterisk (*). Error bars show 95% confidence intervals. Note the loss of significant effects in approach 2.

A recent study, using samples largely overlapping with the current discovery (but smaller in size) and an APOE filtering approach similar to our approach 1, evaluated the association of variants in the larger APOE locus with AD risk in APOE*4/4 carriers and did not identify the strong association of rs439401 that we observed in approach 1 [13]. Beyond differences in sample size and harmonization, the latter study adjusted models by study/cohort and made use of imputed genotypes. We specifically decided in primary analyses not to adjust for cohort, as we reasoned that this may inadvertently diminish power given variable cohort sizes and carrier distributions, especially in APOE*4/4 carriers. We further reasoned that, through our extensive phenotype/genotype harmonization and the use of a mixed-model mega-analysis design, which may capture some latent cohort effects, there was less concern for potential cohort bias. Additionally, given the complex LD structure of the APOE locus, we were concerned about the reliability of imputation and focused only on directly called genotypes. A similar limitation regarding imputation was recognized by the authors of the prior study [13]. These differences likely explain why the rs439401 association was not observed in their study. Regardless of our considerations and of cohort adjustment, we determined that the APOE filtering criteria were the most relevant factor for the variable rs439401 association findings. One important insight from our study was that subjects, particularly controls, with a provided APOE*4/4 genotype had a higher probability of discordance between their imputed and provided genotype than did subjects in the full sample. Such biases are, however, not limited to APOE*4/4 carriers.
The six APOE genotypes (*2/2, 2/3, 3/3, 2/4, 3/4, 4/4) show large differences in numbers of carriers and case-control ratios, owing to the allele frequencies of rs429358/rs7412 and their effect on AD risk. As a result, the different APOE genotypes will be expected to have different concordance rates between true and observed APOE genotypes. We observed varying concordance between imputed and provided APOE across the six APOE genotypes, with particularly lower concordance rates in APOE*2 carriers (Fig. S8). Just as the APOE*4/4 provided genotype was most likely to be incorrect here in controls (a phenotype for which APOE*4/4 is a particularly rare genotype), the APOE*2/2 genotype is more likely to be incorrect in cases (a phenotype for which APOE*2/2 is a particularly rare genotype). The proposed APOE genotype filter will therefore also be specifically relevant for studies focusing on APOE*2. Our study highlights several important considerations for further work on the APOE locus. Most notably, we illustrate how APOE genotype filtering criteria can strongly impact association findings for variants in the APOE locus, especially when studying haplotypes or interaction effects with APOE*2/3/4. The same will hold true when considering non-local variants in, for instance, a genome-wide association study of AD in APOE*4/4 subjects, or when aiming to disentangle genetic interaction effects with APOE*2/3/4. Based on our observations, we suggest that future studies consider implementing the methodology that we proposed in approach 2 and subject their assessment of APOE genotypes to extensive scrutiny. The limitations observed for APOE*2/3/4 genotype reliability also emphasize that next-generation sequencing data will be crucial to interrogate the APOE locus with higher confidence and to ensure that putative rare haplotypes are not missed because of the need for sample filtering in SNP array data. Lastly, in order to have higher confidence in local haplotypes, long read sequencing approaches will additionally be crucial to help disentangle the local haplotype structure on APOE with regard to AD. Limitations One limitation of our proposed approach is that it relies on the availability of high-quality imputed genotypes for rs429358 and rs7412, as well as careful phenotype/genotype harmonization across multiple data sources, which may not always be feasible for different research groups. Nonetheless, our findings show that efforts to increase APOE*2/3/4 genotype reliability should be pursued and that collaborative large-scale AD harmonization initiatives should consider this as an important focus. Furthermore, our approach may be considered to be highly conservative when excluding subjects for which the imputed and provided APOE*2/3/4 genotypes are discordant, since some of the imputed APOE*2/3/4 genotypes may in fact be the correct ones. Future studies may thus also consider retaining those subjects, using their imputed APOE*2/3/4 genotypes. Lastly, we propose to prioritize WES/WGS APOE*2/3/4 genotypes given the high quality and reliability of these sequencing technologies. However, as detailed in the supplement, careful consideration of genotyping quality and depth, integrated with provided APOE*2/3/4 genotype information, were crucial to maximize APOE*2/3/4 genotype reliability. It will therefore be critical that such information is made readily available and evaluated in future studies. 
Conclusion We showed that careful consideration of APOE genotype and appropriate sample filtering was crucial to robustly interrogate the role of the APOE locus on AD risk. Our study presents a novel APOE filtering approach and provides important guidelines for research in this area, as well as for elucidating genetic interaction effects with APOE*2/3/4. Acknowledgements NCRAD. Biological samples used in this study were stored at study investigators' institutions and at the National Cell Repository for Alzheimer's Disease (NCRAD) at Indiana University, which receives government support under a cooperative agreement grant (U24 AG21886) awarded by the National Institute on Aging (NIA). We thank contributors who collected samples used in this study, as well as patients and their families, whose help and participation made this work possible. EADB. We thank the many study participants, researchers, and staff for collecting and contributing to the data, the high-performance computing service at the University of Lille, and the staff at CEA-CNRGH for their help with sample preparation and genotyping, and excellent technical assistance. We thank Antonio Pardinas for his help. This research was conducted using the UK Biobank resource (application number: 61054). Genotyping of the Dutch case-control samples was performed in the context of EADB (European Alzheimer & Dementia biobank) funded by the JPco-fuND FP-829-029 (ZonMW project number #733051061). This research is performed by using data from the Parelsnoer Institute an initiative of the Dutch Federation of University Medical Centres (www. parel snoer. org). 100-Plus study: We are grateful for the collaborative efforts of all participating centenarians and their family members and/or relations. We thank the Netherlands Brain Bank for supplying DNA for genotyping. This work was supported by Stichting AlzheimerNederland (WE09.2014-03), Stichting Diorapthe, Horstingstuit foundation, Memorabel (ZonMW project number #733050814, #733050512) and Stichting VUmcFonds. Additional support for EADB cohorts: WF, SL, and HH are recipients of ABOARD, a public-private partnership receiving funding from ZonMW (#73305095007) and Health~Holland, Topsector Life Sciences & Health (PPP-allowance; #LSHM20106). The DELCODE study was funded by the German Center for Neurodegenerative Diseases (Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE)), reference number BN012. Gra@ce. The Genome Research @ Fundació ACE project (GR@ACE) is supported by Grifols SA, Fundación bancaria 'La Caixa' , Fundació ACE, and CIBERNED (Centro de Investigación Biomédica en Red Enfermedades Neurodegenerativas (Program 1, Alzheimer Disease to MB and AR)). A.R. and M.B. receive support from the European Union/EFPIA Innovative Medicines Initiative Joint undertaking ADAPTED and MOPEAD projects (grant numbers 115975 and 115985, respectively). M.B. and A.R. are also supported by national grants PI13/02434, PI16/01861, PI17/01474 and PI19/01240. Acción Estratégica en Salud is integrated into the Spanish National R + D + I Plan and funded by ISCIII (Instituto de Salud Carlos III)-Subdirección General de Evaluación and the Fondo Europeo de Desarrollo Regional (FEDER-'Una manera de hacer Europa'). Some control samples and data from patients included in this study were provided in part by the National DNA Bank Carlos III (www. banco adn. 
org, University of Salamanca, Spain) and Hospital Universitario Virgen de Valme (Sevilla, Spain); they were processed following standard operating procedures with the appropriate approval of the Ethical and Scientific Committee. The present work has been performed as part of the doctoral program of I. de Rojas at the Universitat de Barcelona (Barcelona, Spain). EADI. This work has been developed and supported by the LABEX (laboratory of excellence program investment for the future) DISTALZ grant (Development of Innovative Strategies for a Transdisciplinary approach to ALZheimer's disease) including funding from MEL (Metropole européenne de Lille), ERDF (European Regional Development Fund), and Conseil Régional Nord Pas de Calais. This work was supported by INSERM, the National Foundation for Alzheimer's disease and related disorders, the Institut Pasteur de Lille and the Centre National de Recherche en Génomique Humaine, CEA, the JPND PERADES, the Laboratory of Excellence GENMED (Medical Genomics) grant no. ANR-10-LABX-0013 is managed by the National Research Agency (ANR) part of the Investment for the Future program, and the FP7 AgedBrainSysBio. The Three-City Study was performed as part of collaboration between the Institut National de la Santé et de la Recherche Médicale (Inserm), the Victor Segalen Bordeaux II University and Sanofi-Synthélabo. The Fondation pour la Recherche Médicale funded the preparation and initiation of the study. The 3C Study was also funded by the Caisse Nationale Maladie des Travailleurs Salariés, Direction Générale de la Santé, MGEN, Institut de la Longévité, Agence Française de Sécurité Sanitaire des Produits de Santé, the Aquitaine and Bourgogne Regional Councils, Agence Nationale de la Recherche, ANR supported the COGINUT and COVADIS projects. Fondation de France and the joint French Ministry of Research/INSERM "Cohortes et collections de données biologiques" programme. Lille Génopôle received an unconditional grant from Eisai. The Three-city biological bank was developed and maintained by the laboratory for genomic analysis LAG-BRC -Institut Pasteur de Lille. Rotterdam Study. The Rotterdam Study is funded by Erasmus Medical Center and Erasmus University, Rotterdam, Netherlands Organization for the Health Research and Development (ZonMw), the Research Institute for Diseases in the Elderly (RIDE), the Ministry of Education, Culture and Science, the Ministry for Health, Welfare and Sports, the European Commission (DG XII), and the Municipality of Rotterdam. The authors are grateful to the study participants, the staff from the Rotterdam Study, and the participating general practitioners and pharmacists. The generation and management of GWAS genotype data for the Rotterdam Study (RS-I, RS-II, RSIII) were executed by the Human
8,278.2
2021-10-26T00:00:00.000
[ "Medicine", "Biology" ]
Kinetic Field Theory: Cosmic Structure Formation and Fluctuation-Dissipation Relations Building upon the recently developed formalism of Kinetic Field Theory (KFT) for cosmic structure formation by Bartelmann et al., we investigate a kinematic relationship between diffusion and gravitational interactions in cosmic structure formation. In the first part of this work we explain how the process of structure formation in KFT can be separated into three processes, particle diffusion, the accumulation of structure due to initial momentum correlations and interactions relative to the inertial motion of particles. We study these processes by examining the time derivative of the non-linear density power spectrum in the Born approximation. We observe that diffusion and accumulation are delicately balanced because of the Gaussian form of the initial conditions, and that the net diffusion, resulting from adding these two counteracting contributions, approaches the contributions from the interactions in amplitude over time. This hints at a kinematic relation between diffusion and interactions in KFT. Indeed, in the second part, we show that the response of the system to arbitrary gradient forces is directly related to the evolution of particle diffusion in the form of kinematic fluctuation-dissipation relations (FDRs). This result is independent of the interaction potential. We show that this relationship roots in a time-reversal symmetry of the underlying generating functional. Furthermore, our studies demonstrate how FDRs originating from purely kinematic arguments can be used in theories far from equilibrium. Introduction Recently, Bartelmann et al. [1,2,3] developed a Kinetic Field Theory (KFT) to treat cosmic structure formation based on methods introduced first by Das and Mazenko in [4,5,6,7] and structurally similar to non-equilibrium quantum field theory. This formalism mirrors the approach of N-body simulations following particles in phase space and, thus, avoiding difficulties with shell crossing ubiquitous in conventional approaches to cosmic structure formation. The canonical, N-particle ensemble considered is initially correlated in phase space and subject to the Hamiltonian equations of motion. The central object of the formalism is a generating functional containing the complete statistical information about the initial conditions and the propagation of the particles. Correlators, e.g. the density power spectrum, can be extracted from the generating functional using functional derivatives. Gravitational interactions between particles are treated perturbatively using a response function in the spirit of Martin-Siggia-Rose theory [8,9] or can be approximated in the Born approximation [3]. In [1] it was demonstrated that already at first order in the interactions the non-linear power spectrum is in good agreement with N-body simulations down to remarkably small scales. In [2] we have shown that our formalism allows to take the full non-linear coupling of free-streaming trajectories due to initial momentum correlations into account and that the free generating functional factorizes into a single numerically tractable integral of standard form. In a separate analysis [3] we show that averaging the interactions in the Born approximation allows for a computation of the non-linear power spectrum which is in remarkable agreement with N-body simulations with relative differences being of order ≈ 15 % up to a wave number of k ≤ 10 h Mpc −1 for a scale factor of a = 1. 
With the present analysis, we wish to prove that diffusion and gravitational interactions are kinematically related in KFT. As a first step we consider the time evolution of the density power spectrum, approximating gravitational interactions in the Born approximation as in [3]. One major advantage of KFT over N-body simulations is that we can compute analytic expressions for density correlation functions; in this way, the study of time derivatives enables us to separate three fundamental processes in structure formation: particle diffusion, the accumulation of structure due to the initial momentum correlations, and gravitational interactions. Our analysis shows that diffusion and accumulation are delicately balanced, demonstrating the eminent role of Gaussian initial conditions. We observe that the resulting net diffusion seems to be closely related to the contributions from interactions. This suggests that the processes of diffusion and interactions are kinematically related to each other. We discuss the reliability of the Born approximation for our purposes by comparing the time derivative of the non-linear power spectrum in the Born approximation with N-body simulations. Motivated by this result, we show in the second part that the time evolution of particle diffusion is related to the response of the system to an arbitrary gradient force by kinematic fluctuation-dissipation relations (FDRs). This relationship is a consequence of a time-reversal symmetry of the generating functional which respects the Gaussian form of the initial conditions. Although these FDRs are purely kinematic statements, our analysis shows that they can give insight into processes far from equilibrium. To our knowledge, this is a novel application of FDRs. This article begins with an introduction to KFT in Section 2, which summarizes the main results from the previous works [1,2,3] and introduces our notation and methods. We then study in Section 3 the time derivative of the non-linear density power spectrum and discuss FDRs in Section 4. Finally, we conclude in Section 5 and give an outlook on future applications and the relevance of FDRs for cosmic structure formation.

Kinetic Field Theory for Cosmic Structure Formation

In this section we review the formalism of the kinetic field theory recently developed in [1] and continued in [2] and [3]. This serves as an introduction to our notation and methods.

Initially correlated Hamiltonian system

We study a Hamiltonian system of $N$ classical particles with identical mass, which we set to unity for simplicity. The individual particles are described by their phase-space coordinates $x_j = (q_j, p_j)^\top$, where the index $j = 1, \ldots, N$ denotes the particle number. Introducing the $N$-dimensional unit vector $e_j$ in $j$-direction, we collect the $N$ phase-space coordinates $x_j$ into a phase-space tensor: where summation over $j$ is implied. In the following, bold-faced symbols always denote tensors combining contributions from all $N$ particles. We define a scalar product between two such tensors by: The unit-mass particles are subject to the Hamiltonian equations of motion, which we sometimes write schematically as $E(\mathbf{x}) = 0$. Using the linear growth factor $D_+ - D_+^{(i)}$ as time coordinate, the Hamiltonian of our system in expanding space is given by (see [10]): where $g$ is normalised to 1 at the initial time, $a$ is the cosmological scale factor, $H$ is the Hubble function, and $v$ is the Newtonian potential satisfying the Poisson equation with density contrast $\delta$.
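For concreteness, the tensorial bookkeeping just described can be summarized as follows; this is a sketch of the conventions referred to above, with the implied summations written out explicitly:

$$
\mathbf{x} = \sum_{j=1}^{N} x_j \otimes e_j , \qquad x_j = (q_j, p_j)^\top , \qquad \langle \mathbf{a}, \mathbf{b} \rangle := \sum_{j=1}^{N} a_j \cdot b_j .
$$

In this notation a single bold symbol such as $\mathbf{x}$ carries the phase-space information of all $N$ particles at once, and the scalar product simply adds up the ordinary particle-wise products.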
We assume the particles to be initially correlated in phase space. Every realization $\mathbf{x}^{(i)}$ of the initial conditions has a probability described by a phase-space distribution $P(\mathbf{x}^{(i)})$. Under the standard cosmological assumptions that the initial velocity is the gradient of a Gaussian random velocity potential, $p = \nabla\psi$, and that the initial particle distribution obeys the continuity equation $\delta = -\nabla^2\psi$, the initial phase-space distribution is completely determined by the initial power spectrum and has the Gaussian form (see [1] for a careful derivation): The factor $C(p^{(i)})$, which additionally appears in [1] and describes initial density correlations, is here assumed to be unity. This is a reasonable assumption if we are interested in the late evolution of cosmic structures, where the initial density correlations become subdominant compared to the initial momentum correlations. The initial momentum correlations are given by the initial density power spectrum $P_\delta(k)$: which defines the $(j,k)$-component of the matrix $C_{pp}$ appearing in (5). The $C^{-1}_{p_jp_k}$ in (5) are then defined as the $(j,k)$-components of the inverse matrix $C^{-1}_{pp}$. Furthermore, we define $\frac{\sigma_1^2}{3}\,\mathbb{1}_3 := C_{p_jp_j}$ as the initial momentum variance. We can already see from equations (5) and (6) that our theory will contain two competing processes. On the one hand, there is particle diffusion due to the initial momentum variance $\sigma_1^2/3$. This diffusion should not be confused with thermal diffusion, but should rather be seen as an ensemble effect: the momentum of each particle seen on its own has the variance $\sigma_1^2$ when averaging over all realisations of the initial conditions. Every single realisation of the initial conditions has a completely deterministic velocity field without any local variance, i.e. there is no 'thermal' component. Furthermore, the conditional probability $C_{p_jp_k}$ takes into account that the momenta of any two particles are not independent of each other, but depend on their initial distance $|q^{(i)}_j - q^{(i)}_k|$, leading to the accumulation of structure already in the free theory.

Generating functional

It was shown in [1] that the entire statistical information on the system is encoded in a generating functional in the spirit of Martin-Siggia-Rose (MSR) theory [8,9]: where we defined the MSR-action This generating functional averages over all possible initial configurations according to the phase-space measure $\mathrm{d}\Gamma = \mathrm{d}\mathbf{x}^{(i)}\, P(\mathbf{x}^{(i)})$, which represents an ensemble average. We introduced an MSR field $\chi$ as the Fourier conjugate to the equations of motion (integration over $\chi$ gives a functional delta distribution ensuring that the equations of motion with an auxiliary source field $K$ are satisfied). The auxiliary generator fields $K$ and $J$ are introduced to allow computing correlation functions through functional derivatives of the generating functional: This gives a physical meaning to the field $\chi$ as a measure for the response of the system to an external force $K$. We define the free generating functional $Z_0[J, K]$ by replacing the full equations of motion by the free equations of motion in the MSR-action (8). If we denote the solution to the equations of motion with auxiliary source ($E + K = 0$) by $\bar{\mathbf{x}}$, or equivalently $\bar{\mathbf{x}}_0$ in the free case, the generating functionals take the form: Usually one is interested in collective observables like the particle density rather than in all the microscopic degrees of freedom described by $\mathbf{x}$.
The statistical information about the density in Fourier space can be extracted from the generating functional with the density operator:Φ whereΦ ρ j is a one-particle operator. In the following we abbreviate the arguments of Fourierspace operators by 1 ≔ t 1 , k 1 and −1 ≔ t 1 , − k 1 . We also introduce a collective response field B combining the microscopic response field χ with the gradient of a density field in Fourier space (i k 1 ρ(1)), thus describing the reaction of the system to a gradient force: The one-particle operator for this field is then given by: These collective-field operators enable us to write the full generating functional in terms of the free generating functional: where v is the Fourier-transform of the interaction potential. Density correlators and interactions The aim of KFT is to compute cosmological density correlators by applying the density operators (11) to the generating functional: The application of n single-particle density operators to Z[J, K] leads to the translation J → J + L, and thus: where the shift tensor L is given by: Density correlators are thus given by Z[L, 0]. The full generating functional is, however, not tractable in an analytic fashion. The result (14) allows for two different ways of computing density correlators in KFT. We can either apply the density operators to the free generating functional and include the interactions by expanding the exponential in (14), or we approximate the full solutionsx appropriately and work with the full generating functional. We briefly discuss both methods in the following. If we want to work with the free generating functional we have to define at first what we mean by 'free'. If we decompose the Hamiltonian (3) into a free and an interacting part, H = H 0 + H I : then the solution to the free equation of motion with auxiliary source K is given by: where we neglected the K q j since they are not acted upon by the response operator (13). We also defined the propagator components ‡: Since t = D + − D (i) + this choice of 'free motion' is equivalent to the Zeldovich approximation and thus contains already part of the interactions. The 'remainder' of the interactions is introduced perturbatively via the interaction operator (14). An application of a one-particle response operatorb j r on the generating functional Z 0 [L, K] gives the response factor b j r : At order m in the interaction operator (14) we have to apply m response operators and m additional density operators. So the most general object we have to consider in this scope is the correlator of m response fields and l = m + n density fields computed from Z 0 [J, K]: It remains to compute the density correlator Z 0 [L, 0] from (10): where we introduced the spatial and momentum shift tensors: ‡ In this work, the letter G denotes correlation functions, see (15), as well as Green's functions, however their different index structure should prevent confusion. and we integrated over the initial momenta. The momentum correlations C p j p k depend only on the relative particle separations r jk = q (i) j − q (i) k . This allows us to write Z 0 [L, 0] in the form: where we introduced the one-and two-particle factors P 1 (L p j ) and P 2 (L) as As already discussed in Section 2.1 the free theory contains two competing processes, diffusion due to the initial momentum variance σ 2 1 /3 of every particle seen on its own and accumulation of structure due to the conditional probability C p j p k between the momenta of different particles. 
This becomes more explicit in (26), where the one-particle factors P 1 L p j describe the diffusion of particle j and the two-particle factor P 2 (L) describes the accumulation of structure due to the conditional probability C p j p k between two particles. We now discuss an alternative approach presented in detail in [3] approximating the solutions to the full equations of motion in the Born approximation. The full equations of motion following from the Hamiltonian (3) are given by: We again want to write the solution of this equation in the following form: In [10] a propagator of the form: was proven to be particularly useful for the study of cosmic structure formation as it contains an even larger part of the interactions than the Zel'dovich propagator. Substituting the solution (29) with the improved Zel'dovich propagator (30) into the equation of motion (28) shows that the force kernel f j (t) in (29) has the form: Substituting the solution (29) into the full generating functional (10) gives the density correlations: It was shown in [3] that the function F(t, t ′ ) ≔ j F j (t, t ′ ) in the case of the power spectrum, i.e. n = 2, can be averaged and approximated in the spirit of the Born approximation: where with the initial density power spectrum P (i) δ . In [3] it is shown that an approximation of the full power spectrum in this approach is in remarkable agreement with the power spectrum from N-body simulations, with relative deviations being of order ≈ 15 % up to a wavenumber of k ≤ 10 h Mpc −1 . Factorization We have not explained yet how Z 0 [L, 0] can be treated. We have shown in [2] that, by introducing the internal wave vectors k ab for a > b and a = 3, . . . , l, the two-particle term can be factorized into a single, numerically tractable integral of standard form: where we defined the wave vectors k j1 for j = 2, . . . , l as: Furthermore, we introduced the Dirac-delta term: and the function The exponent inside the parentheses is a decomposed version of the quadratic form L ⊤ p j C p j p k L p k appearing in the two-particle term (27): where we defined λ jk and λ ⊥ jk implicitly and used the projectors parallel π jk ≔k jk ⊗k jk and perpendicular π ⊥ jk ≔ I 3 − π jk to the unit vectork jk in the direction of k jk . The function P jk acquires an intuitive meaning when considering its limit for large scales or early times (g 2 qp (t, 0)k 2 jk ≪ 1). It was proven in [2] that, in this limit, P jk is linear in the initial power spectrum: For the free two-point function, i.e. the free power spectrum, λ 21 = −1 showing that, in this case, P 21 reduces to the linearly evolved power spectrum on large scales or early times. Thus, we can interpret P jk as a generalization of the linearly evolved power spectrum which takes the full non-linear coupling of free trajectories by initial momentum correlations into account. Crucially, the function P jk can be quickly evaluated numerically using e.g. a Levin collocation scheme [2,11,12,13]. Time derivatives of correlation functions We have seen in the last Section that in the free theory we have two physical processes governing cosmic structure formation, i.e. diffusion due to the initial momentum variance σ 2 1 /3 and accumulation of structure due to the initial conditional probability C p j p k between the momenta of two different particles. Including gravitational interactions into the theory adds a third process to cosmic structure formation, viz. the mutual gravitational attraction between two particles. 
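Before turning to the Born-approximated interactions, it may be useful to spell out how both free-theory processes trace back to the initial power spectrum. The following is a sketch, assuming the relations $p = \nabla\psi$ and $\delta = -\nabla^2\psi$ quoted in Section 2.1 together with standard Fourier conventions; it is intended to be consistent with the definitions of $C_{p_jp_k}$ and $\sigma_1^2$ given there rather than to introduce anything new:

$$
C_{p_jp_k} \;=\; \int \frac{\mathrm{d}^3k}{(2\pi)^3}\, \frac{\vec{k}\otimes\vec{k}}{k^4}\, P_\delta(k)\, \mathrm{e}^{\mathrm{i}\vec{k}\cdot\left(\vec{q}^{\,(i)}_j - \vec{q}^{\,(i)}_k\right)} , \qquad
\sigma_1^2 \;=\; \operatorname{tr} C_{p_jp_j} \;=\; \int \frac{\mathrm{d}^3k}{(2\pi)^3}\, \frac{P_\delta(k)}{k^2} .
$$

The diagonal ($j = k$) part is isotropic and reproduces the per-particle momentum variance $\sigma_1^2/3$ responsible for diffusion, while the off-diagonal part depends only on the initial separation $q^{(i)}_j - q^{(i)}_k$ and encodes the momentum correlations that drive the accumulation of structure in the free theory.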
In this section we treat the interactions in the fashion of the Born approximation as studied in [3] and summarised in the last Section. So the non-linearly evolved density power spectrum is given by: In this Section, the propagator g qp (t, t ′ ) is always assumed to be the improved Zel'dovich propagator (30). We want to understand the effect that all three processes have on cosmic structure formation. Since they appear as factors in (42), this is best done by examining the time derivative of the non-linear power spectrum. The time derivative of the power spectrum in the Born approximation (42) is straightforward to compute: where the operators D (1) t , D (2) t and D I t are defined as time derivative operators acting only on the 1-particle factors, i.e. e Q D , the 2-particle factor, i.e. P 21 , and the interaction factor e − F (t) respectively. The action of these time derivative operators on the non-linear power spectrum can be computed to be: where the derivative of the interaction factor is given by: Note that the derivative of the boundary of the integral vanishes as F(t, t) = 0 since g qp (t, t) = 0. The derivative of the integrand F(t, t ′ ) does not vanish and becomes: In most of the following discussion we neglect the k-independent part of this term as it simply describes an overall rescaling of the spectrum and we are mostly interested in k-dependent features. For this purpose, we introduce the notationsD (I) t andD (I) t as differential operators acting only on the k-dependent and k-independent part of F(t, t ′ ) respectively. Similarly, the two-particle term (45) contains the linear evolution of the power spectrum as well as additional contributions due to the fact that we consider the full non-linear coupling of free trajectories by initial momentum correlations, see (41) and the discussion thereafter. In the following we are mostly interested in the latter part. Thus, we define: in this way we have subtracted the linear evolution times the Born factor e − F (t) . Balance between diffusion and accumulation of structure In a first step we want to examine the one-and two-particle contributions, (44) and (49), describing the diffusion and accumulation of structure due to the initial conditions. Both contributions are depicted in Fig. 1 as well as their sum and for comparison also the time derivative of the linearly evolved power spectrum g 2 qp (t, 0)P (i) δ . We divided the one-and twoparticle contributions by the Born factor e − F(t) in order to make the curves independent of any shortcomings of the Born approximation on small scales. In this way the results are exact at any scale and it makes sense to depict them up to a wave number of k = 100 h Mpc −1 . We observe a remarkable balance between the one-and two-particle contributions on small scales as their sum is several orders of magnitude smaller than both terms individually. This balance originates from the Gaussian form of the cosmological initial conditions and is only possible because our formalism allows to take the full non-linear coupling of free trajectories due to the initial momentum correlations into account -accounting only for part of them would lead to a strong domination of diffusion. Any small violation of this balance would lead to either very strong diffusion, thus preventing any structure formation, or much stronger structure formation than currently observed (many orders of magnitude larger than the linear evolution). 
An interesting consecutive question for a future study is whether this observation could constrain initial non-Gaussianities. The sum of the one-and two-particle contributions can be positive or negative, depending on the time we are looking at, see Fig. 2 for its behaviour at different times. While we can see at early times that this sum leads to some structure formation on small scales, diffusion dominates at late times where it results in a net diffusion effect. Interactions and net diffusion We now want to include the contributions from interactions (46) into the picture. The time derivative of the power spectrum in the Born approximation, see (43), is given by the net diffusion, i.e. the sum of the one-and two-particle contributions, now including the Born factor, the linear evolution times the Born factor, i.e. the part of the two-particle contributions which we neglected so far, see (49), and the contributions from interactionsD (I) t P nl andD (I) t P nl . We depict these terms individually as well as their sum in Fig. 3. Also shown is the linear evolution for comparison. We observe that the time derivative of the k-independent part of the Born approximation is negligible on all scales. The time derivative of the linear power spectrum times the Born factor ensures that on large scales the linear evolution is reproduced and is also important for the non-linear evolution on small scales. The time derivative of the k-dependent part of the Born factor (D (I) t P nl ) follows the net diffusion closely on large scales, but has the opposite sign, on intermediate scalesD (I) t P nl is bigger than the net diffusion and on small scales these two contributions again come quite close to each other in amplitude. Before we can draw any further conclusions, it is important to check the reliability of the Born approximation by comparing our results with the time derivative of the power spectrum obtained from N-body simulations [14]. We depict the result from the Born approximation (43) in comparison to N-body simulations in Fig. 4 for three choices of the scale factor a = 1, 0.3, 0.1. In Fig. 5 we plot the relative difference between our results and N-body simulations for the same choices of the scale factor as well as the scales factors a = 0.9, 0.8 which we discuss in more detail later on. At first we discuss the results for the scale factor a = 1 in Figs. 4 and 5. When comparing the results in the Born approximation with simulations, we observe a qualitatively similar behaviour as in [3] where the non-linear power spectrum (42) was compared with simulations. The Born approximation is able to describe the non-linear structures up to a scale of k ∼ 5 h Mpc −1 to remarkable accuracy with the absolute value of the relative error being on average of the order of ∼ 15 % and never bigger than 25 %. On scales beyond k ∼ 5 h Mpc −1 the Born approximation falls strongly below the results from simulations. Considering the evolution of the relative error in time we see that, as expected, the Born approximation predicts the early evolution (a = 0.1) reasonably well. However, the relative error is of order ∼ 30 % for scale factors between a = 0.3 and a = 0.9 and on large scales the error does not increase monotonically with redshift. The curves for a = 0.3 in Fig. 4 show that the Born approximation fails to predict the correct form for the onset of the non-linear structure at 1 h Mpc −1 k 5 h Mpc −1 . Notice that for k 5 h Mpc −1 and a = 0.3 the magnitude of the relative error decreases again. 
This contradicts the intuition that a perturbative theory of structure formation should be accurate on linear and mildly non- linear scales, but should fail on highly non-linear scales. However, the Born approximation is a non-perturbative, but approximate approach to cosmic structure formation and it is not clear whether this intuition is reasonable in this case. The Born approximation seems to be especially well suited for the study of the late time non-linear evolution. The time evolution of the relative error needs to be studied in detail together with the reliability and limitations of the Born approximation in a future analysis. For our purposes here it is merely important that the Born approximation at late times (a = 1, 0.9, 0.8) is in agreement with simulations at a level of 30 % on scales up to k ∼ 5 h Mpc −1 . With these caveats in mind we want to analyse the processes of net diffusion D (1) t +D (2) t P nl and the k-dependent part of the interactionsD (I) t more closely. We plot the sum of both contributions relative to the net diffusion in Fig. 6. We observe that the two contributions become significantly closer in amplitude for late times. Notice that this tendency of the two terms to approach each other in amplitude is most notably visible on scales k < 5 h Mpc −1 where the Born approximation is in reasonably good agreement with simulations. On large scales the two contributions seem to follow a close relationship. These observations would need to be considered unnatural unless there is some mechanism relating the amplitude of both processes to each other. This motivates our analysis in the next Section where we indeed find a relation between interactions and diffusion in terms of fluctuation-dissipation relations. Fluctuation-Dissipation Relations In this Section we prove a fundamental connection between diffusion and interactions in terms of fluctuation-dissipation relations. We show that FDRs in KFT relate the process of diffusion with the reaction of the free system to an arbitrary gradient force. In this Section we work in the 'free' theory, where the trajectories of free particles are given by (19) and (20) for K = 0, the propagator is given by the Zel'dovich propagator (21) and the reaction to the system to an arbitrary force K is computed via the response operator (13). This makes the relations independent of the interaction potential. Density autocorrelation In a first step we examine the single-particle density autocorrelation G (0) ρ j ρ j (12) in more detail. In this case the system is effectively a one-particle system and the initial probability distribution (5) reduces to a Maxwellian velocity distribution: Thus, the autocorrelation is purely described by the dissipative one-particle part P 1 (L p j ) and two-particle factors are absent: The Dirac δ-distribution reflects the homogeneity of the system and leads to the following form of the momentum shift vector L p 1 : which is invariant under time-translation in the case of Zel'dovich trajectories. Having determined L p j , the time derivative of the autocorrelation can be computed: where we used ∂ t 1 g qp (t 2 , t 1 ) = −1 for the Zel'dovich propagator. This expression for the time derivative of the autocorrelation becomes most interesting when comparing it with the response function: We can conclude: This relation between the response function and the time derivative of the autocorrelation has exactly the form of a fluctuation-dissipation relation. 
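For comparison, the classical equilibrium fluctuation-dissipation relation, which the next paragraph invokes as the "standard textbook result" (56), can be written in one common convention as (sign conventions for the time derivative differ between textbooks):

$$
\chi_{OO}(t, t') \;=\; -\,\beta\, \frac{\partial}{\partial t} \big\langle O(t)\, O(t') \big\rangle_{\mathrm{eq}}\, \theta(t - t') ,
$$

with $\beta$ the inverse temperature and $\theta$ enforcing causality. The relation derived above has the same structure, with the initial momentum variance taking over the role of the temperature, as discussed in what follows.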
We have multiplied the response field on the left-hand side of (55) with −i because this is the correctly normalized response field measuring the reaction of the system to two-particle interactions as can be seen in (14), where this factor also appears. Fluctuation-dissipation relations are known to hold for many statistical systems [15]. For example, in classical linear response theory one considers a system in thermal equilibrium and examines the response of an observable O to a small perturbation away from equilibrium. This is measured by the response function χ(t, t ′ ). Using time-translation invariance in thermal equilibrium, the standard textbook result can be derived: This result is remarkable as it tells us that the response of the system to a small perturbation is the same as the evolution of equilibrium correlations. Examination of the equilibrium systems thus gives information on non-equilibrium physics. β is here the inverse temperature, so comparison of (55) and (56) shows that in KFT σ 2 1 /3 assumes the role of the temperature. The FDR in KFT (55) has the same mathematical form as the conventional FDR (56). However, there are some significant physical differences between the relations since KFT is a theory far from equilibrium. We discuss this in Section 4.5 in more detail, but at first we want to generalise the result (55). General density correlators Time translation invariance (TTI) is an important requirement for the validity of FDRs. In the case of the autocorrelation, TTI follows from homogeneity allowing us to write L p j in the form (52). If the structure of the correlator is not that simple, time-translation invariance might not hold for all momentum shift vectors L p j . This leads to deviations from the FDR in its conventional form. The most general density correlator Z 0 [L, 0] has an arbitrary number n j of density fields with particle index j and equivalently for any other particle index: We introduced the last term as an abbreviation which we use in the following. For this correlator, the momentum shift vector L p j has the form: where n j is the number of applied density operators with particle index j. We split L p j into a part respecting TTI and a part which breaks this invariance: where the time variable t 1 is arbitrary so far and the TTI breaking part of L p j is given by: We see that ∆ L p j vanishes for any autocorrelation, where n j is identical with the full number of applied density operators and statistical homogeneity thus ensures n j s=1 k s = 0. In the general case, however, ∆ L p j is non-zero and leads to a correction term in the fluctuation-dissipation relation. Analogous to the steps (53) to (55) we derive the relation: where we assumed t s > t 1 for all 1 < s ≤ n j and t 1 is now the time where we evaluate the response function. We see that breaking TTI leads to a violation of the FDR in its conventional form. The response factor on the left hand side measures the propagation of an inhomogeneity from t 1 to the times t s . However, P 1 L p j describes the diffusion from the initial time to the times t s . Only if the modes satisfy n j s=1 k s = 0, can we neglect the propagation of particle j from the initial time to t 1 . Thus, in general, we have to subtract the diffusion taking place between the initial time and t 1 , which is encoded in ∆ L p j (t 1 ). 
In order to quantify the diffusion taking place between t_1 and the times t_s, we introduce a covariant time derivative that is invariant under time translation. The operator D_t^{(1,j)} is defined as a time derivative acting only on the one-particle factor of particle j, i.e. on P_1(L_{p_j}). In terms of this time derivative, the FDR for a general density correlator, assuming t_s > t_1 for all 1 < s ≤ n_j, takes the same form as before. The diffusion of particle j and the reaction of particle j to an arbitrary external force are thus kinematically related, and this result is not restricted to the autocorrelation (55): it is valid for an arbitrary correlator if we substitute the full time derivative by the covariant time derivative D_t^{(1,j)} describing the evolution of diffusion. In the following we show that this connection between the one-particle ensemble diffusion and the response function goes even deeper and is a consequence of the structure of the generating functional and the Gaussian form of the cosmological initial conditions.

Time-Reversal Symmetry

In statistical field theories, FDRs are typically connected to a time-reversal symmetry of the generating functional [16]. We will show that the same is the case for our kinetic field theory. This gives an alternative way of deriving the FDRs considered above and leads to an easy generalization to higher-order response functions, which will turn out to be related to higher-order time derivatives. First of all, we note that the formalism of KFT is originally designed for times later than the initial time t = 0. Hence it is not immediately clear what a time-reversal symmetry means within this formalism. However, in principle there is no need to restrict the formalism to positive times. The solutions to the equations of motion (19) and (20) are also valid for negative times. Thus, we can propagate the particles from their initial conditions backwards in time and calculate correlation functions at negative times by functional derivatives of the generating functional Z[J, K] with respect to the auxiliary fields at negative times, with t, t′ > 0. Our aim is to find a transformation T of the microscopic fields x and χ which reverses the time coordinate and leaves the generating functional invariant. The time-reversed generating functional is defined by evaluating the path integral with time-reversed dynamics and in this way contains the statistical information of the system with time-reversed dynamics. The phase factor with the auxiliary fields is not transformed. Thus, correlation functions are computed in the same way as before, but they are now evaluated with time-reversed dynamics. If the generating functional is invariant under the transformation T, any correlation function will be invariant as well. This can be shown by substituting x → Tx and χ → Tχ in the path integrals in (65) and using that a time-reversal symmetry has to be its own inverse. In this form, we see that functional derivatives of the time-reversed generating functional give time-reversed correlation functions, but with the original dynamics. Thus, invariance of the generating functional also implies invariance of correlation functions under time reversal. We now aim at finding the explicit form of T. Since the density and response field operators, (11) and (13), act only on the spatial part of J and the momentum part of K, we can restrict ourselves here to the case where the momentum part of J and the spatial part of K vanish.
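Continuing with the same assumed forms as above, one concrete way to realize the covariant derivative is to differentiate the one-particle factor only through the TTI part of L_p while holding the pre-t_1 diffusion fixed. The sketch below verifies that this coincides with the ordinary time derivative exactly when the modes sum to zero, which is the autocorrelation case invoked again in the next subsection. This construction is an illustration consistent with the text, not the paper's definition verbatim.

```python
# Sketch of the covariant time derivative D_t^(1,j) acting on the one-particle
# factor only, under the same assumed forms as before (Zel'dovich propagator,
# Gaussian P1, L_p = -sum_s g_qp(t_s, 0) k_s with t_1 the earliest time).
import sympy as sp

t1, t2, t3, k1, k2, k3, sigma1 = sp.symbols("t1 t2 t3 k1 k2 k3 sigma1", real=True)

def g(t, tp):
    return t - tp                                   # Zel'dovich propagator (assumed)

times = [t1, t2, t3]
modes = [k1, k2, k3]

L_full = -sum(g(ts, 0) * ks for ts, ks in zip(times, modes))
L_tti = -sum(g(ts, t1) * ks for ts, ks in zip(times, modes))
delta = -g(t1, 0) * sum(modes)                      # TTI-breaking part

def P1(L):
    return sp.exp(-sp.Rational(1, 6) * sigma1**2 * L**2)

full_derivative = sp.diff(P1(L_full), t1)           # ordinary time derivative
# Covariant derivative: differentiate only through the TTI part of L_p,
# holding fixed the diffusion accumulated between the initial time and t1.
aux = sp.Symbol("aux")
covariant = sp.diff(P1(L_tti + aux), t1).subs(aux, delta)

# Autocorrelation-like case (modes sum to zero): the two derivatives coincide.
closure = {k1: -(k2 + k3)}
print(sp.simplify((full_derivative - covariant).subs(closure)))   # -> 0
# In general they differ by the diffusion accumulated before t1.
print(sp.simplify(full_derivative - covariant))
```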
Using time-reversal invariance of the Hamiltonian equations of motion, we prove in Appendix A that the transformation T given in (67) is a time-reversal symmetry of the free generating functional. The term c_j[J, t] appearing in this transformation is a functional of J, defined in (68); it describes the breaking of time-translation invariance in close analogy to the term ∆L_{p_j} in (60). Indeed, if we substitute J → L in (68), as is the case for density correlators, (60) and (68) become completely equivalent. The transformation laws for the microscopic degrees of freedom allow us to derive transformation laws for the macroscopic fields B and ρ. Defining 1̄ ≔ (−t_1, k_1), the time-reversed density and response fields can be expressed in terms of the original fields, with an additional term ∆B_j(1). The first term on the right-hand side has a simple interpretation if we consider it to be embedded into a density correlation. We prove the corresponding identity in Appendix B.2, where the dots denote an arbitrary number of further density fields. Thus, we can interpret the term ∆B_j(1) as the evolution of time-reversed diffusion.

In the following two paragraphs we want to build up some intuition for this term. A time-reversal symmetry relates the probability of a path to that of its time-reversed version. To illustrate this, we consider a single particle of the ensemble and choose one realization of its initial position q, but leave the initial momentum free. The trajectories of the different realizations of the momentum form a double cone, as shown in Figure 7, if we propagate the particle backward in time as well. At each time, the probability to find the particle at some position q is given by a Gaussian with variance σ_1²/3 · g_qp²(t, 0), which is symmetric in time.

Figure 7. Sketch of different realizations (dashed lines) of the trajectory of one particle with fixed initial position, propagated also to negative times backwards from the initial conditions. The probability p(q, t_1) to find the particle at position q at time t_1 is Gaussian and symmetric under time reversal, p(q, t_1) = p(q, −t_1).

This symmetry is the reason why density correlations are invariant under time reversal, see (69). However, we see in Figure 7 that for positive times diffusion takes place, while for negative times the process of diffusion is reversed. The response field, being the functional Fourier conjugate of the equations of motion, see (8), is sensitive to the direction of time. Thus, the response field is not invariant under time reversal, and the time-reversed diffusion has to be taken into account in the transformation (70). The form (70) of the symmetry, together with (73), is reassuringly similar to the time-reversal symmetries known from statistical field theory. For example, Andreanov et al. [16] consider Langevin dynamics with potential V(X) and a stochastic force η(t) of variance 2T. They find a time-reversal symmetry (76) in which X̂ takes on the role of the MSR response field, the equivalent of χ. Comparing with (70) and (73), we see that the temperature plays the same role as the velocity variance σ_1²/3. The second term on the right-hand side in (76) describes diffusion analogous to (73).

Fluctuation-dissipation relations from the time-reversal symmetry

In statistical field theory, the time-reversal symmetry can be used to derive FDRs.
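The double-cone picture can be checked with a few lines of Monte Carlo, again under the assumed Zel'dovich form g_qp(t, 0) = t: a fixed initial position, Gaussian initial momenta with per-component variance σ_1²/3, and propagation to +t and −t give position distributions with identical variance, i.e. p(q, t) = p(q, −t).

```python
# Monte Carlo illustration of the double-cone picture in Figure 7: one particle
# with a fixed initial position and a random initial momentum (per-component
# variance sigma1**2/3), propagated forwards and backwards with Zel'dovich
# trajectories q(t) = q0 + g_qp(t, 0) * p, where g_qp(t, 0) = t (assumed form).
import numpy as np

rng = np.random.default_rng(1)
sigma1 = 0.8
q0 = 0.0
t = 0.6

p = rng.normal(scale=sigma1 / np.sqrt(3.0), size=200_000)   # initial momenta
q_forward = q0 + t * p          # positions at time +t
q_backward = q0 + (-t) * p      # positions at time -t

# Both position distributions are Gaussian with the same variance, so
# p(q, t) = p(q, -t): diffusion looks identical forwards and backwards.
print(np.var(q_forward), np.var(q_backward), (sigma1**2 / 3.0) * t**2)
```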
For the Langevin dynamics (74), invariance of correlations under the symmetry (76) yields a relation between correlation and response functions [16]. Due to causality, the first term on its right-hand side has to vanish if t > t′, and we arrive at the fluctuation-dissipation relation quoted above. In the following we aim at a similar derivation of the FDRs for KFT using the time-reversal symmetry (70). Since the transformation T leaves correlation functions invariant, we can, for example, equate a mixed response-density correlator with its time-reversed counterpart. For t_2 > t_1 the left-hand side has to vanish due to causality. Using that the operator D_{t_1}^{(1,j)} is equivalent to a regular time derivative for the autocorrelation, we arrive at the FDR (55). The time-reversal symmetry not only reproduces the FDR (55), but also gives rise to a whole hierarchy of relations between response functions of arbitrary order and time derivatives of correlation functions. This proves that response functions of arbitrary order are related to diffusion.

Kinematic FDRs far from equilibrium

Since KFT is a theory far from equilibrium, there are significant physical differences between FDRs in KFT and in thermal systems. The physical picture behind the FDR in a thermal system is the following. Particles in a thermal system necessarily interact with each other, as this enables the system to distribute its total energy among the degrees of freedom according to equipartition. The interactions between particles lead to the diffusion of correlations, which is described by ∂_t ⟨O(t)O(t′)⟩_eq in (56). If the system is slightly perturbed away from equilibrium, the energy of the system will be redistributed towards equipartition due to the interactions in the system. This leads to a dissipation of the perturbation and a relaxation back to equilibrium, which is described by the left-hand side of equation (56). Thus, the response of the system χ(t, t′) and the diffusion ∂_t ⟨O(t)O(t′)⟩_eq have the same physical origin, viz. the momentum transfer between the microscopic degrees of freedom. The FDRs in KFT are derived within the free theory, far away from equilibrium, so the physical picture is different. Within a single realization of the initial conditions, particle j has a constant momentum p_j^{(i)}. However, in the ensemble seen as a whole, the momentum of particle j is random with velocity variance σ_1²/3 due to the averaging over all realizations. The solutions to the free equations of motion with inhomogeneity K, see (19), describe the propagation of this random initial momentum in terms of the propagator g_qp. However, the propagation of the inhomogeneity is described by g_qp as well. The physical origin of the FDRs in KFT can thus be seen as a consequence of Newton's second axiom: the inhomogeneity changes the momentum by an amount which then has to propagate like a momentum. The FDRs in KFT are therefore purely kinematic statements, which, however, makes them no less valuable. In our derivation of the time-reversal symmetry we crucially used the Gaussian form of the initial conditions, so these kinematic FDRs seem to rely on this special form of the initial conditions. In thermal systems, FDRs describe the linear response of the system to small departures from equilibrium. In KFT we are dealing with a system far from equilibrium and consider departures from the free theory. Keep in mind that the 'free' theory already contains some interactions, since we use the Zel'dovich propagator, cf. (18).
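For the thermal side of this comparison, the classical FDR can be verified numerically for the simplest Langevin system. The overdamped dynamics dX = −kX dt + √(2T) dW with a harmonic potential is a choice made here for concreteness, not a form taken from [16]; in its integrated form, the textbook FDR states that the response of ⟨X⟩ to a small step force h switched on at t = 0 equals (C(0) − C(t))/T, which the simulation below reproduces.

```python
# Numerical check of the thermal FDR (56) for overdamped Langevin dynamics
# dX = -k X dt + sqrt(2 T) dW (an Ornstein-Uhlenbeck process; the harmonic
# potential V(X) = k X**2 / 2 is chosen here only for a concrete check).
# Integrated FDR for a small step force h switched on at t = 0:
#     ( <X(t)>_h - <X(t)>_0 ) / h  =  ( C(0) - C(t) ) / T
import numpy as np

rng = np.random.default_rng(2)
k, T, h = 1.0, 0.5, 0.01
dt, n_steps, n_traj = 1e-3, 2000, 100_000

x0 = rng.normal(scale=np.sqrt(T / k), size=n_traj)   # start in equilibrium
x_free, x_forced = x0.copy(), x0.copy()
c0 = np.mean(x0 * x0)                                 # C(0)

checkpoints, corr, resp = [], [], []
for step in range(1, n_steps + 1):
    noise = rng.normal(scale=np.sqrt(2.0 * T * dt), size=n_traj)  # shared noise
    x_free += -k * x_free * dt + noise
    x_forced += (-k * x_forced + h) * dt + noise
    if step % 500 == 0:
        checkpoints.append(step * dt)
        corr.append(np.mean(x0 * x_free))             # equilibrium C(t)
        resp.append(np.mean(x_forced - x_free) / h)   # integrated response

for t, c, r in zip(checkpoints, corr, resp):
    print(f"t={t:.1f}  response={r:.3f}  (C(0)-C(t))/T={(c0 - c) / T:.3f}")
```

Using the same noise realization for the free and the perturbed ensemble makes the response estimate essentially deterministic, so the two columns agree up to the Monte Carlo scatter of the correlation estimate.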
Through the time-reversal symmetry we were able to prove a whole hierarchy of higher-order FDRs, which means that our interpretation of FDRs is not limited to linear departures from the free theory. To our knowledge, kinematic FDRs of this type, valid far from equilibrium, constitute a novel relation and are of significant interest because they can describe systems far away from equilibrium.

Conclusion and Outlook

In the context of the recently developed formalism of KFT [1,2], we have shown that the process of cosmic structure formation can be split into three processes: particle diffusion due to the initial momentum variance σ_1²/3, accumulation of structure due to the initial conditional probability C_{p_j p_k} between the momenta of two particles, and interactions relative to the inertial evolution. We observed that the processes of diffusion and accumulation of structure are delicately balanced and at late times result in a net diffusion. This delicate balance is a consequence of the fact that our formalism allows us to take the full non-linear coupling of free trajectories by initial momentum correlations into account, and so far it relies explicitly on the Gaussian form of the initial conditions. Including the contributions from interactions in the Born approximation, we were able to compute the time derivative of the full non-linear density power spectrum and compare the Born approximation with simulations. We observed that the naive intuition coming from perturbative approaches to cosmic structure formation, namely that the relative error of the approximation should be smaller at high redshifts and on large scales, is not valid here, as our approach is non-perturbative but approximate. At low redshift, the relative error is of order 30% even on highly non-linear scales up to k ∼ 5 h Mpc⁻¹. We postpone a thorough analysis of the time evolution of the relative error to a future study. We observed a tendency of the net diffusion to approach the interaction contribution over time, suggesting that diffusion and interaction are kinematically related to each other. The FDRs discussed in Section 4 show that the evolution of diffusion is indeed kinematically related to the reaction of the system to a gradient force. This result is independent of the form of the gravitational potential. Although KFT describes a system far from equilibrium, we find that kinematic FDRs hold, and in fact a whole hierarchy of higher-order FDRs follows from a time-reversal symmetry of the generating functional. The Gaussian form of the initial conditions seems to be crucial for the validity of the FDRs, as is the form (19) of the trajectories, which shows that the initial momentum is propagated in the same way as an inhomogeneity. Finally, we want to give a rather speculative outlook on future applications and the relevance of the FDRs found in this work. It is yet unclear whether the FDRs are somehow related to the virial theorem, both being derived from kinematic arguments and relating the interactions and inertial motion in a system. The FDRs might thus be used to study virialization and the stable-clustering regime. In fact, our results in Fig. 6 indicate that the relative sum of net diffusion and interactions drops to zero on very small scales. If this remains true when post-Born corrections are taken into account, which is necessary on these scales, it would imply that an equilibrium between clustering and diffusion is reached on very small scales.
Furthermore, since the Gaussian form of the initial conditions plays an important role for the validity of the FDRs and for the cancellation of the one- and two-particle contributions in Fig. 1, FDRs might be used in future studies to examine how non-Gaussianities affect the formation of a stable-clustering regime.

Appendix A

... where c_j(t) remains to be determined. In order to prove invariance of the generating functional, we first have to check how the MSR action, which determines the dynamics of the system, changes under the transformation. After we have shown how the dynamics of the system transform, we use these dynamics to prove invariance of the generating functional and determine the appropriate form of the remaining free parameter c_j(t) of the transformation.

... × exp{ i ∫dt J_{q_j}(t) · [ q_j^{(i)} + g_qp(t, 0) p_j^{(i)} − ∫dt′ G_qp^{(adv)}(t, t′) K_{p_j}(t′) ] }
= ∫dΓ exp{ −i ∫dt′ K_{p_j}(t′) · ∫dt J_{q_j}(t) g_qp(t, 0) } exp{ i ∫dt K_{p_j}(t) · c_j(t) },

where we executed the derivative with respect to the initial momenta in the last step. Careful comparison with the non-transformed generating functional (A.15) shows that c_j has to be a functional of the auxiliary field J and takes the form (68). This finally proves the time-reversal symmetry (67) together with c_j in the form (68).

Appendix B

Here we used that p_k(t_1) = p_k^{(i)}, since no response fields are applied. In a final step we evaluate the integrals over the initial momenta:

k_1 · C^{−1}_{p_j p_k} ⟨ p_k(t_1) ρ_j(1) … ⟩
= V^{−N} ∫dq^{(i)} k_1 · C^{−1}_{p_j p_k} ( i ∂_{L_{p_k}} ) e^{−(1/2) L_{p_l} C_{p_l p_m} L_{p_m}} e^{i L_{q_r} · q_r^{(i)}}
= i V^{−N} ∫dq^{(i)} k_1 · C^{−1}_{p_j p_k} C_{p_k p_s} L_{p_s} e^{−(1/2) L_{p_l} C_{p_l p_m} L_{p_m}} e^{i L_{q_r} · q_r^{(i)}},

where we used C^{−1}_{p_j p_k} C_{p_k p_s} = δ_{js} 𝟙_3.
Development of a vesicular stomatitis virus pseudotyped with herpes B virus glycoproteins and its application in a neutralizing antibody detection assay

ABSTRACT Herpes B virus (BV) is a zoonotic virus belonging to the genus Simplexvirus, the same genus as human herpes simplex virus (HSV). BV typically establishes asymptomatic infection in its natural hosts, macaque monkeys. In humans, however, BV infection causes serious neurological disease and death. As such, BV research can only be conducted in a high-containment facility (i.e., biosafety level [BSL] 4), and the mechanisms of BV entry have not been fully elucidated. In this study, we generated a pseudotyped vesicular stomatitis virus (VSV) expressing BV glycoproteins using the G-complemented VSV∆G system, which we named VSV/BVpv. We found that four BV glycoproteins (i.e., gB, gD, gH, and gL) were required for the production of a high-titer VSV/BVpv. Moreover, VSV/BVpv cell entry was dependent on the binding of gD to its cellular receptor nectin-1. Pretreatment of Vero cells with endosomal acidification inhibitors did not affect VSV/BVpv infection. This result indicated that VSV/BVpv entry occurred by direct fusion with the plasma membrane of Vero cells and suggested that the entry pathway is similar to that of native HSV. Furthermore, we developed a VSV/BVpv-based chemiluminescence reduction neutralization test (CRNT), which detected neutralizing antibodies against BV in macaque plasma samples with high sensitivity and specificity. Crucially, the VSV/BVpv generated in this study can be used under BSL-2 conditions to study the initial entry process through the gD–nectin-1 interaction and the direct fusion of BV with the plasma membrane of Vero cells.

IMPORTANCE Herpes B virus (BV) is a zoonotic virus that is highly pathogenic to humans. BV belongs to the genus Simplexvirus, the same genus as human herpes simplex virus (HSV). In contrast to HSV, the cell entry mechanisms of BV are not fully understood, and research procedures that manipulate infectious BV must be conducted in biosafety level (BSL)-4 facilities. As pseudotyped viruses provide a safe viral entry model because of their inability to produce infectious progeny virus, we set out to generate a pseudotyped vesicular stomatitis virus bearing BV glycoproteins (VSV/BVpv) by modifying the expression constructs of the BV glycoproteins, and we successfully obtained VSV/BVpv with a high titer. This study provides novel information on constructing VSV/BVpv and on its usefulness for studying BV infection.
genus (1).Macaque monkeys, including rhesus macaque (Macaca mulatta), cynomolgus macaque (Macaca fascicularis), and Japanese macaque (Macaca fuscata), are the natural hosts of BV.Macaque monkeys are also used as laboratory animals in biomedical research to study human disease.In macaque monkeys, BV infection is usually asympto matic or causes very mild disease, whereby the virus establishes a latent infection in the sensory nerve ganglia and can be reactivated; this process is similar to that of HSV infection in humans (2,3).However, in a non-natural host species, BV infection is highly pathogenic.In humans, BV infection is severe, resulting in permanent neurological deficit or death.Without timely and adequate intervention measures, case fatality rates can reach 80% (4).Human BV infections are rare, with only 50-60 cases documented globally since the first reported case of human BV infection in 1933 (5).In 2019, two human BV infection cases were identified for the first time in Japan; the presence of infection was confirmed by detecting BV genome sequences in the cerebrospinal fluid (CSF) of patients (6). Herpesviruses are large, enveloped, double-stranded DNA viruses.HSV encodes over 84 viral proteins; at least 12 of these are envelope glycoproteins, some of which are involved in viral entry (7).Unlike many other enveloped viruses, which only require one or two viral glycoproteins to mediate receptor binding and fusion, HSV requires four viral glycoproteins (gB, gD, gH, and gL) for cell entry (8)(9)(10)(11).In the general model of herpesvirus entry into the cell, the receptor-binding protein gD first binds to its cell surface receptor(s).This interaction activates a heterodimeric complex composed of gH and gL (gH/gL), which induces a conformational change in the viral fusion protein gB (10,12,13).gB, gH, and gL, which comprise the core fusion machinery in the herpesvirus family, are conserved entry glycoproteins.However, the receptor-binding glycoproteins are diverse and not conserved among the herpesvirus subfamilies (13)(14)(15).Nectin-1, also called herpesvirus entry mediator C (HevC), is a cell-cell adhesion molecule and a major cell surface receptor for gD of BV.Meanwhile, four receptors have been identified for gD of HSV-1: nectin-1, nectin-2, herpesvirus entry mediator (HVEM), and modified heparan sulfate (16).The amino acid sequence identity of gB between BV and HSV-1/-2 is approximately 80%, whereas gD and gH/gL are less conserved, ranging from 53% to 66% (17).Although not essential for viral entry, gC is also involved in the attachment of HSV-1 to heparan sulfate proteoglycans on the cell surface (18,19).By contrast to that of HSV-1, the entry mechanism of BV has not been fully elucidated. Recently, the production of vesicular stomatitis virus (VSV) pseudotyped with HSV-1 entry glycoproteins was reported (20).The study demonstrated that the four glycopro teins (gB, gD, gH, and gL) of HSV-1 were not only necessary but also sufficient for HSV-1 cell entry.However, the cell entry mechanism of this VSV-pseudotyped virus might not be the same as that of native HSV-1, because it infected neither Vero nor HeLa cells expressing the gD receptor nectin-1, both of which have been widely used in HSV-1 infection research (21). 
Because BV is highly pathogenic to humans, it should be studied in biosafety level (BSL)-4 facilities (22).By contrast, VSV-pseudotyped viruses can be handled at a standard BSL-2 facility.Taking advantage of its lower risk to humans, VSV pseudotyped with BV glycoproteins (VSV/BVpv) is a useful tool for the elucidation of the complex cell entry mechanism of BV.In this study, we successfully generated a novel VSV/BVpv, which infected Vero, a cell line highly susceptible to BV infection (23,24).By modifying the signal peptide domain of BV glycoproteins using an expression vector, which permitted the cell surface expression of BV glycoproteins, we obtained a high-titer VSV/BVpv preparation.Using this VSV/BVpv, we then developed a pseudotype-based neutralizing antibody assay to detect anti-BV antibodies in macaque plasma. Macaque plasma samples Plasma samples from 88 macaques (78 from rhesus and 10 from Japanese macaques; 44 females and 44 males), which were captured in Chiba Prefecture in 2019, were used in this study.Of these, plasma sample no.91 exhibited a high reactivity against BV gH in the immunofluorescence assay (IFA) and was therefore used as a positive BV control antibody in Western blotting. Plasmids used for the generation of pseudotyped viruses Genetic sequences encoding the BV glycoproteins (gB, gD, gH, gL, and gC) were amplified from total DNA extracted from the CSF of BV-infected patients (6) by PCR using the KOD FX Neo polymerase (TOYOBO, Osaka, Japan), and their sequences were subsequently determined.Human-codon-optimized DNA sequences encoding the BV gH, gL, and gC glycoproteins were then synthesized (Integrated DNA Technologies, Coralville, IA, USA) to improve the efficiency of cloning into the expression vector.DNA sequences corresponding to the 28-, 24-, 37-, 24-, and 31-amino-acid (signal peptide region) truncations at the N-terminus for BV gB, gD, gH, gL, and gC, respectively (Fig. S1), were subcloned into the pDisplay mammalian expression vector (Thermo Fisher Scientific) at the Sfi I and Sal I sites using the In-Fusion HD cloning kit (Takara Bio, Shiga, Japan).In these constructs, the sequences corresponding to the vector's signal peptide (Ig κ-chain leader), which precedes the HA-tag, were connected to each N-terminus-truncated glycoprotein.The DNA sequences of these constructs were verified by Sanger sequencing. 
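The construct design described above removes each glycoprotein's native signal peptide (28, 24, 37, 24, and 31 residues for gB, gD, gH, gL, and gC, respectively) before fusing the remainder behind the vector's Ig κ-chain leader and HA tag. The helper below sketches the corresponding trimming at the coding-sequence level; the sequence it operates on is a placeholder, not an actual BV sequence.

```python
# Illustrative helper for the construct design described above: drop the codons
# encoding each glycoprotein's native signal peptide before cloning the remainder
# downstream of the vector's Ig kappa-chain leader / HA tag. The truncation
# lengths are taken from the text; the demo sequence is a placeholder.
SIGNAL_PEPTIDE_RESIDUES = {"gB": 28, "gD": 24, "gH": 37, "gL": 24, "gC": 31}

def trim_signal_peptide(cds: str, protein: str) -> str:
    """Remove the first N codons (N = signal peptide length) from a CDS."""
    n_residues = SIGNAL_PEPTIDE_RESIDUES[protein]
    if len(cds) % 3 != 0:
        raise ValueError("CDS length must be a multiple of 3")
    if len(cds) < 3 * (n_residues + 1):
        raise ValueError("CDS shorter than the signal peptide to be removed")
    return cds[3 * n_residues:]

# Placeholder CDS (not a real BV sequence): 100 codons in total.
demo_cds = "ATG" + "GCT" * 99
print(len(trim_signal_peptide(demo_cds, "gD")) // 3, "codons retained")
```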
Production of VSV pseudoviruses Pseudotype VSVs bearing BV glycoproteins were generated as follows.Briefly, 293T cells (approximately 1 × 10 5 cells/well of a 12-well plate) were grown on collagen-Icoated tissue culture plates (Corning, Corning, NY, USA) and were co-transfected with a combination of expression plasmids encoding each BV glycoprotein (200 ng each/well of a 12-well plate) using the TransIT-LT1 transfection reagent (Mirus Bio LLC, Madison, WI, USA).At 48 h after transfection, the cells were washed with phosphatebuffered saline (PBS) once and inoculated with G-complemented (*G) VSV∆G-encoding luciferase as reporter gene (designated here as VSVpv) at a multiplicity of infection (MOI) of 0.2 (26).After 1.5 h of incubation at 37°C, the cells were washed with PBS three times and were resuspended in 5% FBS-DMEM.After 24 h of incubation, the supernatants (containing VSV pseudoviruses) were centrifuged to remove cell debris and were stored at −80°C until use.VSV pseudoviruses expressing gB, gD, gH, and gL were called VSV∆G-BDHL; those expressing gB, gH, and gL were called VSV∆G-BHL; and those expressing gB, gD, gH, gL, and gC were called VSV∆G-BDHLC.Mock virus was prepared from 293T cells transfected with the empty pDisplay vector (800 or 1,000 ng/well of a 12-well plate).An equal volume (5 µL) of VSV pseudoviruses produced in parallel was used as an inoculum in the infection experiments to assess the role of gD or gC on infectivity.Briefly, Vero cells (approximately 5 × 10 4 cells) seeded in 96-well plates were infected with 5 µL of VSV pseudoviruses.At 24 h after infection, the infectivity of the pseudoviruses was assessed by measuring the luciferase activity using the Bright-Glo luciferase assay system (Promega, Madison, WI, USA) and GloMax Discover microplate reader (Promega); infectivity was expressed in relative light units (RLUs).To confirm the incorporation of BV glycoproteins into VSV particles, VSVpv or VSV∆G-BDHL (designated as VSV/BVpv) were purified by ultracentrifugation in 25% sucrose solution.The viral proteins were then subjected to SDS-PAGE and Western blotting (see later section). Production of polyclonal antibodies against BV glycoproteins To obtain antisera against the BV glycoproteins, recombinant proteins were produced using a baculovirus expression system.Briefly, the coding sequences for the soluble form of gD (sgD 1-341 ) and the full length of gL (due to the absence of the transmembrane region) were tagged with the 8× His-tag sequence at the 3′ end and were cloned into the baculovirus transfer vector pAcYM1 via the BamHI site (27).The transmem brane topology of the glycoproteins was predicted using TMHMM Server v2.0 (http:// www.cbs.dtu.dk/services/TMHMM/). 
Recombinant baculoviruses were then generated by co-transfection of each transfer vector and the BestBac linearized baculovirus DNA (Expression Systems) into the Sf-9 cells using the FuGENE HD transfection reagent (Promega).Recombinant baculoviruses expressing sgD 1-341 and gL were labeled as AcYM1-BV-sgD 1-341 and AcYM1-BV-gL, respectively.Recombinant BV-sgD 1-341 was prepared from Tn5 cells infected with AcYM1-BV-sgD 1-341 .The culture supernatant of Tn5 cells inoculated with AcYM1-BV-sgD 1-341 was dialyzed with PBS and was purified on a column packed with Ni-NTA agarose (QIAGEN, Venlo, The Netherlands).The recombi nant BV-gL was purified from the lysates of Tn5 cells inoculated with AcYM1-BV-gL, also by Ni-NTA agarose column purification.Polyclonal antibodies against BV-gD and BV-gL were generated in BALB/c mice by immunizing the animals with purified recombinant proteins (10 µg injected four times), which were administered alongside the TiterMAX Gold adjuvant (Merck). Cloning and expression of recombinant nectin-1 The nectin-1-coding gene (HVEC, GenBank: AF060231) was amplified by RT-PCR from RNA extracted from HeLa cells and cloned into the pKS336 mammalian expression vector (28). Treatment of Vero cells with endosomal acidification inhibitors To examine the effect of treatment of endosomal acidification inhibitors on VSV/BVpv entry, Vero cells were pretreated with bafilomycin A1 (Merck), ammonium chloride (Fujifilm Wako Chemicals), and chloroquine diphosphate salt (Merck).Bafilomycin A1 was prepared in dimethyl sulfoxide (DMSO) to make a stock solution (20 µM).Ammonium chloride and chloroquine were diluted in deionized distilled water to make 1.0 M and 20 mM stock solutions, respectively.The Vero cells were pretreated with increasing concentrations of these inhibitors for 1 h at 37°C, and then the cells were infected with VSVpv or VSV/BVpv (equivalent to ca. 2-5 × 10 5 RLU/well).Infectivity of pseudoviruses was determined as described in "Production of VSV pseudoviruses." Detection of BV neutralizing antibodies in macaque plasma samples For the pseudotype-based chemiluminescence reduction neutralization test (CRNT), heat-inactivated (56°C, 30 min) macaque plasma samples or medium only (no-plasma control) were diluted with 2% FBS-DMEM and were mixed with an equal volume of VSV/ BVpv (equivalent to ca. 1 × 10 6 RLU/well); the final dilution was 1:20 (plasma to medium).After a 1-h incubation at 37°C, the mixture was inoculated into Vero cells seeded on 96-well plates (Porvair Sciences, King's Lynn, Norfolk, UK); the luciferase activity was measured 24 h later.Neutralizing antibody activity was measured as a decrease in chemiluminescence relative to the no-plasma control (relative percent infectivity).The plaque reduction neutralization test (PRNT) was performed using HSV-1 strain F. 
Serially diluted (fourfold, starting from 1:5 dilution) macaque plasma samples (100 µL per sample) were mixed with an equal volume of HSV-1 (100 PFU/100 µL) and were incubated for 1 h at 37°C.Then, 100 µL of the mixture was inoculated into duplicate wells of Vero cells seeded in 24-well culture plates.After a 1-h incubation, the inoculum was removed and replaced with fresh medium containing γ-globulin (vol/vol, 100:2) (Merck).After 2 d, the cells were fixed with 10% formalin and were stained with crystal violet solution.Finally, the number of plaques was counted.The PRNT (i.e., the HSV-1 NT) titer was calculated based on the reciprocal of the plasma dilution needed to achieve a ≥50% reduction in plaque count (PRNT 50 ). Immunofluorescence assay Each of the BV glycoprotein expression vectors or the empty vector (250 ng/well of an 8-well chamber slide) was transfected into 293T cells seeded on collagen-coated plates.After a 48-or 72-h incubation, the transfected cells were fixed with 10% formalin in PBS.The cells were treated with a primary anti-HA antibody for 1 h at 37°C and then with a secondary Alexa-Fluor-488-conjugated anti-mouse IgG for 1 h at 37°C.The cells were examined on Vectashield mounting medium containing 4′,6-diamidino-2-phenylin dole (DAPI) (Vector Laboratories, Newark, CA, USA) under a fluorescence microscope (BZ-X800; Keyence).To verify the detection of nectin-1, CHO-K1 cells, which are naturally nectin-1 deficient (29), were transfected with a plasmid encoding nectin-1 or an empty vector.Expression of nectin-1 was confirmed by IFA using an anti-nectin-1 primary antibody and a secondary Alexa-Fluor-488-conjugated anti-mouse IgG. Western blotting To detect the viral proteins within the VSV particles, the purified viral particles were subjected to SDS-PAGE and Western blotting.Briefly, 293T cells, transfected with expression plasmids encoding the BV glycoproteins, were lysed in PBS containing 1% NP40.The lysates were then mixed with SDS sample buffer solution (Fujifilm Wako Chemicals) and were boiled to denature the proteins.The proteins were fractioned by SDS-PAGE on 12.5% or 5%-20% e-PAGELs (ATTO, Tokyo, Japan).The protein bands were stained using Bio-Safe Coomassie Premixed Staining Solution (Bio-Rad, Hercules, CA, USA) or were transferred to a polyvinylidene fluoride (PVDF) membrane (Merck).The membranes were blocked with PBS containing 0.05% Tween-20 (PBST) and 5% skim milk, and were washed with PBST before being stained with primary antibodies against viral proteins (i.e., VSV G and M proteins; HSV-1 and HSV-2 gB; and BV gD, gL, and HA), the HA-tag, or macaque plasma (BV-antibody-positive control).Finally, an HRP-conjugated secondary antibody was applied to the membranes, and the bands were visualized using the chemiluminescent detection reagent (Cytiva) and ImageQuant LAS 500 chemilumi nescent imager (GE HealthCare). 
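The two neutralization readouts used in this study reduce to simple calculations, sketched below with invented numbers: the CRNT relative percent infectivity is the luciferase signal in the presence of plasma expressed as a percentage of the no-plasma control, and the PRNT50 titer is the reciprocal of the highest plasma dilution that still reduces the plaque count by at least 50%.

```python
# Sketch of the two neutralization readouts described above. All RLU values and
# plaque counts below are invented, purely for illustration.
from statistics import mean

def relative_percent_infectivity(sample_rlu, no_plasma_rlu):
    """CRNT readout: mean sample RLU as a percentage of the no-plasma control."""
    return 100.0 * mean(sample_rlu) / mean(no_plasma_rlu)

def prnt50(dilution_factors, mean_plaques, control_plaques, cutoff=0.5):
    """PRNT50: reciprocal of the highest dilution giving >= 50% plaque reduction.

    dilution_factors are reciprocal dilutions in increasing order, e.g. [5, 20, 80, 320].
    Returns None if no dilution reaches the cutoff.
    """
    titer = None
    for factor, plaques in zip(dilution_factors, mean_plaques):
        if 1.0 - plaques / control_plaques >= cutoff:
            titer = factor               # keep the highest qualifying dilution
    return titer

no_plasma = [1.02e6, 0.97e6, 1.05e6]          # no-plasma control wells (RLU)
neutralizing = [6.1e4, 5.4e4, 7.0e4]          # strongly neutralizing sample (RLU)
print(relative_percent_infectivity(neutralizing, no_plasma))            # low percentage

print(prnt50([5, 20, 80, 320], [3, 12, 38, 75], control_plaques=100))   # -> 80
```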
Statistical analysis Data analysis and visualization were performed using GraphPad Prism 8 (GraphPad software, San Diego, CA, USA).An unpaired t-test with Welch's correction was used to determine statistically significant differences between groups.The diagnostic efficiency of the CRNT was determined initially by constructing receiver operating characteristic (ROC) and two-graph ROC curves using StatFlex ver.7 (Artech, Osaka, Japan).The cutoff value (COV) was determined using the Youden index.The relationship between the HSV-1-based PRNT 50 and the VSV/BVpv-based CRNT results (relative percent infectivity) was evaluated using Spearman's rank correlation coefficient.The threshold for statistical significance was defined as a P value < 0.05. Expression of BV glycoproteins The four glycoproteins (gB, gD, gH, and gL) of HSV-1 play essential roles in viral entry (13).In addition to these glycoproteins, gC increases the infectivity of VSV pseudo types harboring HSV-1 glycoproteins (21).Here, we constructed mammalian expression plasmids encoding each of the five BV glycoproteins (gB, gD, gH, gL, and gC).These glycoproteins contained the N-terminus cell surface targeting signal instead of the native signal peptide sequences, which was followed by the HA epitope tag.The expression plasmids were transfected into 293T cells, and protein expression was examined by IFA using an anti-HA-tag antibody (Fig. 1A).The expression of gB, gD, gC, and gH was clearly observed, and they were localized on the cell surface and within the cytoplasm.In contrast, only a faint staining of gL was observed (Fig. 1A).However, when gH and gL were co-transfected, the cell surface and cytoplasm expression of the gH/gL complex was clearly visible.In addition, the molecular sizes of gH and gL observed in the co-transfected cells were higher than those detected in cells expressing either protein alone (Fig. 1B).This result is consistent with a previous report demonstrating that the HSV-1 gL acts as a chaperone for gH and is involved in the transport of gH to the cell membrane (30,31).Western blotting confirmed that the expressed gB, gD, and gC had the desired molecular weights (Fig. 1B). Generation of VSV pseudotyped with BV glycoproteins To generate VSV pseudotyped with BV glycoproteins, various combinations of the expression plasmids encoding the BV glycoproteins described above were co-transfected into 293T cells.Upon inoculation of the seed virus encoding luciferase as reporter gene (VSVpv) into 293T cells, VSV pseudoviruses were collected from the cell culture superna tants and their infectivities were measured using the luciferase assay.We did not determine the viral titer quantitatively nor measure the viral protein amount of pseudo viruses, which is often required to standardize the inoculum (32).Instead, we performed the infection experiments using equal volumes of the supernatants containing pseudovi ruses that were prepared in parallel to overcome this limitation.The VSV∆G/Luc-BDHL, which contained gB, gD, gH, and gL, exhibited the highest level of luciferase activity.Meanwhile, VSV∆G/Luc-BHL, which contained gB, gH, and gL, but not gD, had a relatively low luciferase activity, which was almost the same as that of the negative control VSV pseudovirus (Fig. 
2).However, VSV∆G/Luc-BDHLC, which expressed gB, gD, gH, gL, and gC, exhibited lower infectivity rates than VSV∆G/Luc-BDHL.Similar results were obtained when the VSV seed pseudovirus expressing a GFP reporter (VSV∆G/GFP-*G) was used to produce VSV pseudoviruses (Fig. S2).Collectively, these results showed that BV infectivity was highest when the four glycoproteins gB, gD, gH, and gL were used to generate the VSV pseudovirus (i.e., VSV∆G/Luc-BDHL).The infectivity level of VSV∆G/Luc-BDHL was two-log higher than the background level (Fig. 2) and was similar to that of VSV pseudo viruses bearing viral glycoproteins reported in the previous studies (33,34).It was noted that gB expression was highest among glycoproteins (Fig. 1B).No additional examina tions were performed to assess whether the altered expression of gB (or other glycopro teins) affects VSV pseudovirus infectivity, because the VSV∆G/Luc-BDHL infectivity seemed to be sufficient for further experiments to examine viral entry as well as to measure neutralizing antibody.Therefore, we decided to use this VSV pseudovirus bearing BV glycoproteins BDHL expressing luciferase (designated as VSV/BVpv) in subsequent experiments. Incorporation of BV glycoproteins into VSV virions To confirm the successful incorporation of BV glycoproteins into VSV particles, VSV/BVpv and its seed virus VSVpv were purified by ultracentrifugation and were subjected to SDS-PAGE.Protein bands corresponding to the VSV proteins L (241 kDa), N (47 kDa), and M (27 kDa) were detected in both the VSV/BVpv and the VSVpv virions (Fig. 3A).Protein bands corresponding to the VSV protein G (63 kDa) and the BV gB (ca.120 kDa) were only detected in the VSVpv or VSV/BVpv preparations, respectively.The four BV glycoproteins (gB, gD, gH, and gL) were detected in the VSV/BVpv virions by Western blotting (Fig. 3B).These results indicate that BV glycoproteins were successfully incorporated into the VSV pseudovirus particle. Infection of cells with VSV/BVpv The infectivity of VSV/BVpv was examined by using various cell lines.Huh7 cells were highly susceptible to VSV/BVpv infection, followed by Vero and BHK-21 cells.However, CHO-K1 cells showed no susceptibility to the VSV/BVpv infection (Fig. 4A). BV cell entry is primarily mediated by the interaction between gD and its receptors nectin-1 or nectin-2 on the cell surface.Of note, nectin-1 is more effective than nectin-2 at mediating BV cell entry (35).To investigate whether the infection of VSV pseudovi rus generated in this study was dependent on gD, VSV/BVpv was inoculated into the naturally nectin-1-expressing Vero cells (21) in the presence of increasing concentrations of anti-gD or negative control antibodies.The infectivity of VSV/BVpv was blocked by the anti-gD antibody but not by the negative control antibody in a dose-dependent manner (Fig. 4B).Next, CHO-K1 cells, which do not naturally express nectin-1 and are therefore not BV permissive (29), were used to examine the nectin-1 dependency of VSV/BVpv.When CHO-K1 cells transfected with empty vector were inoculated with VSV/BVpv ("CHO empty" in Fig. 4C), only the background level of luciferase activity was observed (Fig. 4D), which was comparable to the luciferase activity of CHO-K1 cells inoculated with the gDdeficient pseudovirus ("BHL" in Fig. 4D).By contrast, when VSV/BVpv was inoculated into CHO-K1 cells expressing nectin-1 ("CHO nectin-1" in Fig. 4C), the luciferase activity was 8.9-fold higher than that of the CHO empty inoculated with the VSV/BVpv (Fig. 
4D).These results indicate that VSV/BVpv infection was dependent on the presence of gD and nectin-1.We next investigated the infectivity of VSV/BVpv in human neuroblastomaderived cells.When IMR-32 cells were infected with VSV∆G/Luc-BDHL (BDHL; expressing gB, gD, gH, and gL), a high level of luciferase activity (approximately 10 5 RLU/μL) was observed, while VSV∆G/Luc-BHL (BHL; lacking gD) showed a little infectivity (Fig. 4E).Consistent with the experiments using Vero cells, the enhancement of the infectivity by adding gC expression in preparation of VSV/BVpv was not observed (VSV∆G/Luc-BDHLC, Fig. 4E), and the infection of VSV/BVpv to IMR-32 cells was inhibited by anti-gD antibody (Fig. 4F). Effect of endosomal acidification inhibitors on the cell entry of VSV/BVpv It has been shown that an exposure of endosomal acidification inhibitors on Vero cells does not inhibit HSV-1 infection, indicating that HSV-1 can fuse directly with the plasma membrane when infected to Vero cells, but not internalized through low-pH-dependent endocytic pathway (36).To examine the entry pathway of the VSV/BVpv in Vero cells, Vero cells were pretreated with endosomal acidification inhibitors, such as bafilomycin A1, ammonium chloride, or chloroquine, and then the cells were inoculated with VSV pseudoviruses.As shown in Figure 5, infection of seed virus VSVpv to Vero cells was inhibited by treatment with all reagents tested in a dose-dependent manner, consis tent with the fact that VSVpv utilize endocytosis and a low-pH-dependent fusion (37).In contrast, endosomal acidification inhibitors did not inhibit VSV/BVpv entry.These results indicated that VSV/BVpv enters to Vero cells via the pH-independent direct fusion pathway, as shown in the entry study of HSV-1 (36). Detection of BV neutralizing antibodies using macaque samples We next tested the performance of the generated VSV/BVpv in a CRNT to detect neutralizing antibodies against BV in macaque plasma.The detection accuracy of the CRNT was compared against that of the PRNT, which uses HSV-1.Because of (E) VSV∆G/Luc-BDHL, VSV∆G/Luc-BHL, and VSV∆G/Luc-BDHLC, which expressed gB, gD, gH, gL, and gC, were infected to IMR-32 cells, and the luciferase activity was measured at 24 h post-infection.(F) VSV∆G/Luc-BDHL was preincubated with diluted anti-gD mouse serum or control serum and then inoculated into IMR-32 cells.The luciferase activity was determined as described in panel B. Data are represented as the mean of three experiments with standard deviation (SD). 
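The cutoff value used to call a CRNT result positive is determined with the Youden index, as described in the statistical analysis section. The sketch below illustrates that calculation on invented data, treating low relative infectivity as the 'positive' (neutralizing) class defined by the reference assay; it is a plain-Python stand-in for the ROC analysis performed with StatFlex.

```python
# Sketch of the Youden-index cutoff determination described in the statistical
# analysis section. "Positive" means neutralizing, i.e. a low CRNT relative
# infectivity; all values below are invented for illustration.
def youden_cutoff(values, is_positive):
    best_cutoff, best_j = None, -1.0
    for cutoff in sorted(set(values)):
        tp = sum(1 for v, pos in zip(values, is_positive) if pos and v <= cutoff)
        fn = sum(1 for v, pos in zip(values, is_positive) if pos and v > cutoff)
        tn = sum(1 for v, pos in zip(values, is_positive) if not pos and v > cutoff)
        fp = sum(1 for v, pos in zip(values, is_positive) if not pos and v <= cutoff)
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        specificity = tn / (tn + fp) if tn + fp else 0.0
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_cutoff, best_j = cutoff, j
    return best_cutoff, best_j

relative_infectivity = [4.2, 8.9, 11.3, 15.0, 62.0, 78.5, 91.0, 103.4]
prnt_positive = [True, True, True, True, False, False, False, False]
print(youden_cutoff(relative_infectivity, prnt_positive))   # -> (15.0, 1.0)
```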
DISCUSSION

We successfully generated VSV/BVpv bearing the four BV glycoproteins gB, gD, gH, and gL, as well as a luciferase reporter. VSV/BVpv efficiently infected the Vero cell line, which has been widely used to study BV cell entry. The properties of VSV/BVpv were similar to those of infectious BV in that gD and its cellular receptor nectin-1 were required for cell entry. This was evidenced by the fact that VSV/BVpv infection of BV-permissive Vero cells was inhibited in the presence of an anti-gD antibody, while VSV/BVpv infection of the BV-nonpermissive CHO-K1 cells was increased upon expression of nectin-1. We showed that VSV/BVpv could infect not only Vero cells and nectin-1-expressing CHO-K1 cells, but also various other cell lines, such as Huh7 (human hepatoma), BHK-21 (baby hamster kidney), and IMR-32 (human neuroblastoma) cells. The requirement of gD during the infection of IMR-32 cells with VSV/BVpv was also confirmed, indicating that the infection of human neural cells with BV is gD dependent (Fig. 4). A particular merit of this study is that we were able to generate VSV/BVpv with a high degree of infectivity (>10^5 RLU/µL) upon inoculation of Vero cells, which could be used directly in neutralization assays. By contrast, VSV pseudotype viruses bearing other viral envelope proteins sometimes require additional concentration steps (e.g., via ultracentrifugation), which are laborious and time-consuming (20).

The pCAGGS mammalian expression vector has previously been used for the expression of viral glycoproteins to generate many VSV pseudotype viruses (34,42,43). However, when we generated VSV/BVpv in cells transfected with plasmids constructed from the pCAGGS vector and inoculated it into Vero cells, minimal luciferase activity was observed (data not shown). By contrast, when we used the pDisplay mammalian expression vector, in which the native signal sequence of each glycoprotein was replaced with the Ig κ-chain leader sequence, to express the BV glycoproteins and generate VSV/BVpv, a much higher level of Vero cell infectivity was achieved. High infectivity of VSV pseudovirus bearing HSV-1 glycoproteins on Vero cells was also observed when the pDisplay vector was used for the expression of the HSV-1 glycoproteins (gB, gD, gH, and gL) (Fig. S4). The discrepancy between these results obtained with different expression vectors might be related to the different properties of the VSV and BV glycoproteins implicated in the viral maturation process in transfected cells. The VSV virion is coated with an envelope G glycoprotein. After synthesis and glycosylation in the Golgi apparatus, the G protein is transported to the plasma membrane, where it is incorporated into the VSV virion during the budding process (44). Herpesviruses, however, use a different mechanism, whereby the outer envelope, which is studded with viral glycoproteins, is acquired in the Golgi or the trans-Golgi network; these enveloped virions are then transported to the plasma membrane in vesicles (45,46). Therefore, expressing BV glycoproteins with native viral signal peptides may have prevented them from being transported to the plasma membrane for efficient incorporation into VSV pseudotype particles. However, further research is needed to confirm this theory, for example, by conducting a detailed analysis of the intracellular distribution and/or transport kinetics of BV glycoproteins connected to native or heterologous leader sequences. Hilterbrand et al. (21) have shown enhancement by gC in a VSV system bearing HSV-1 glycoproteins. It should be noted that this enhancement is restricted to specific cell types (CHO-HVEM and HaCaT cells) (21). In contrast, no enhancement by gC was observed for VSV pseudovirus bearing BV glycoproteins inoculated into IMR-32 cells (Fig.
4E), nor for VSV pseudovirus bearing HSV-1 glycoproteins infected to Vero cells (Fig. S4).Furthermore, a reduction of VSV pseudoviruses bearing BV glycoproteins by gC was observed when using the Vero cells (Fig. 2).This discrepancy probably comes from different cell types used.Consistent with these findings, it has been shown that, by using gCdeficient recombinant HSV-1, gC supports an HSV-1 entry to CHO-HVEM and HEKa cells during endosomal low-pH pathway but not to Vero and IMR-32 cells (47).We suggested that BV entry process varied by cell types with regard to the requirement of gC. Comparison of the relative infectivity of VSV/BVpv in the CRNT with that of HSV-1 in the PRNT showed that the CRNT was more sensitive than the PRNT.Using the CRNT, four HSV-1-negative samples were reclassified as being BV-positive (relative infectivity range: 10.4%-14.6%)(Fig. 6A). There are limitations in this study.First, we were not able to perform any experiments with the infectious BV because BV is not available for research purposes at present in Japan.Instead, we used the gold standard HSV-1 PRNT as a surrogate BV neutralization assay to determine the presence of anti-BV antibodies in macaque plasma samples.A previous study reported that neutralizing antibodies in monkey sera generally neutralize both BV and HSV-1 (38).However, the higher sensitivity of CRNT (using VSV/BVpv) as compared with that of PRNT (using HSV-1) may be explained by the fact that neutralizing antibodies against BV in macaque plasma may not fully react to HSV-1 neutralizing epitopes.In general, reporter-based assays are simpler, faster, and more sensitive than traditional serological assays.Thus, the results of this study suggest that the CRNT using VSV/BVpv is a useful tool for determining the presence of neutralizing antibodies against BV in plasma samples. Another limitation of this study is that VSV pseudoviruses do not reflect the native lipid/glycoprotein composition of BV particles.Therefore, VSV pseudoviruses do not necessarily imply the same infection properties and/or antigenicity of native BV.However, like a native BV, entry of VSV/BVpv was dependent on gD and nectin-1 (Fig. 4).In addition, there was no inhibitory effects of VSV/BVpv infection on Vero cells treated with endosomal acidification inhibitors (Fig. 5).This entry process was consistent with infection of native HSV-1 to Vero cells (36).Therefore, VSV/BVpv would be useful to understand an initial entry process through gD-nectin-1 interaction and a direct fusion of BV glycoproteins with the plasma membrane of Vero cells. In conclusion, we have developed a VSV pseudovirus bearing the four essential glycoproteins of BV.We produced high titers of VSV/BVpv without performing additional concentration steps and showed that it infected a variety of cell types using the same cell entry route as BV.Furthermore, we developed a VSV-pseudotype-based CRNT system to detect the presence of anti-BV neutralizing antibodies in plasma samples.We demonstra ted that the CRNT was simple, rapid, highly sensitive, safe, and did not require a BSL-4 facility.However, due to the cross-antigenicity between BV and HSV-1, the CRNT may not be effective at specifically diagnosing BV infection in humans; thus, the test must be further improved to differentiate between the anti-BV and anti-HSV-1 antibody response.Meanwhile, the VSV/BVpv pseudotype system might be useful in investigating anti-BV antibody in monkey plasma samples. 
FIG 1 FIG 1 Expression of B virus glycoproteins.(A) 293T cells were transfected with plasmids encoding BV glycoproteins.After a 48-to 72-h incubation, the cells were fixed with 10% formalin and were stained with an anti-HA monoclonal antibody detected with a secondary Alexa-Fluor-488-conjugated antibody.(B) The plasmid-transfected cells were also analyzed by Western blotting using an anti-HA monoclonal antibody and anti-BV-antibody-positive macaque plasma.Molecular weight marker labels (kDa) are shown on the left, and the protein of interest in each lane is indicated by an arrow. FIG 2 FIG 2 Infectivity of VSV pseudoviruses.The indicated combinations of plasmids expressing BV glycoproteins were co-transfected into 293T cells.After infecting 293T cells with VSV∆G/Luc-*G, VSV pseudoviruses expressing gB, gD, gH, and gL (VSV∆G/Luc-BDHL); gB, gH, and gL (VSV∆G/Luc-BHL); or gB, gD, gH, gL, and gC (VSV∆G/Luc-BDHLC) were produced.Mock virus was prepared from 293T cells transfected with the empty vector.Each pseudovirus was used to infect Vero cells seeded into 96-well plates.After a 24-h incubation, the luciferase activity of each well was measured using a luminometer.Data are presented as the mean of three experiments with standard deviation (SD).Significance was conducted using a two-tailed Student's t-test with Welch's correction.**P < 0.01. FIG 3 FIG 3 Incorporation of BV glycoproteins into VSV pseudotype particles.Pseudotyped VSV∆G/Luc bearing the BV glycoproteins gB, gD, gH, and gL (VSV/BVpv), and VSV∆G/Luc-*G (VSVpv) were partially purified by ultracentrifugation in 25% sucrose medium, separated by SDS-PAGE, and analyzed by Coomassie blue staining (A) or by Western blotting (B) using the indicated antibodies.Monkey, BV-positive macaque plasma.Molecular weight marker labels (kDa) are shown on the left, and the proteins of interest in each lane are indicated by an arrow or a label. FIG 4 FIG 4 Entry of VSV/BVpv into Vero cells is dependent on gD and its receptor nectin-1.(A) The VSV/BVpv bearing the four entry essential glycoproteins gB, gD, gH, and gL was used to infect various cell lines.Briefly, 50 µL of the 10-fold-diluted VSV/BVpv was used to infect the Vero, Huh7, BHK-21, and CHO-K1 cell lines seeded into 96-well plates.After a 24-h incubation, the luciferase activity was measured.(B) The VSV/BVpv was preincubated with serially diluted anti-gD mouse serum or control serum.The mixture was then inoculated into Vero cells.The relative infectivity of VSV/BVpv mixed with the serum samples versus that of the no-serum control is shown.(C) CHO-K1 cells were transfected with a nectin-1 expression plasmid or an empty vector.The expression of nectin-1 on CHO-K1 cells detected by an anti-nectin-1 monoclonal antibody is shown (top images: fluorescence field; bottom images: bright field).(D) CHO-K1 cells, transfected with a nectin-1 expression plasmid or an empty vector, were subsequently infected with VSV pseudovirus bearing the four entry essential glycoproteins gB, gD, gH, and gL (VSV∆G/Luc-BDHL), or that lacking gD (VSV∆G/Luc-BHL).The infectivity of each pseudovirus determined by measuring luciferase activity is shown. 
FIG 5 Entry of VSV/BVpv into Vero cells is not affected by inhibitors of endosomal acidification. Vero cells were pretreated with increasing concentrations of bafilomycin A1 (A), ammonium chloride (B), or chloroquine (C) for 1 h, and then VSV/BVpv or VSVpv was added to the cells in the presence of inhibitors. After a 24-h incubation, the infectivity was determined by measuring luciferase activity. Data are presented as the mean of three experiments with standard deviation (SD).

FIG 6 Neutralization assays with macaque plasma samples. (A) Correlation of the neutralization assay results of the HSV-1-based PRNT with the relative infectivity determined using the VSV/BVpv-based CRNT. The HSV-1-based PRNT and VSV/BVpv-based CRNT were performed using plasma samples (n = 88) obtained from macaques. The COV of the relative infectivity of VSV/BVpv is shown as a dotted line. The significance (P < 0.0001) of the difference between the positive and negative samples of the HSV-1-based PRNT is also shown. Bars represent the mean relative infectivity of VSV/BVpv for each group. (B) Comparison of the relative infectivity determined using the VSV/BVpv-based CRNT and the neutralizing antibody titer determined using the HSV-1-based 50% PRNT (PRNT50) (n = 40). The Spearman's rank correlation coefficient was −0.4154 (95% confidence interval: −0.6492 to −0.1099, P = 0.0077).

TABLE 1 Comparing the performance of the conventional and new neutralization tests using macaque plasma samples
Strategic Reason for Employing Workers with Public Service Motivation

We construct a simple game-theoretic model in which one private firm and one public (or state-owned) firm compete in the quantity of goods produced or services provided. The private and public firms each decide how many workers with public service motivation they will employ as part of an incentive scheme. We assume that both firms produce homogeneous goods with a quadratic cost function but that the private firm is more efficient than the public firm. Both firms face linear inverse demand. We show that whether public firms employ more workers with public service motivation than private firms depends on the efficiency gap between the public and private sectors. This result explains why some literature in public administration reports a significant difference in public service motivation between employees in the private and public sectors and other literature does not.

Introduction

One of the important issues in the public administration literature is whether employees of public (or state-owned) firms and those of private firms differ in terms of work-related values, reward preferences, needs, and personality types (Wittmer, 1991). Following a prominent line of work on public service motivation (Perry & Porter, 1982; Perry, 1996, 1997), many papers study this issue. However, overall, empirical studies on this issue do not yield consistent results. That is, some studies show that employees of public and private firms are different, but other studies do not. For example, while Rainey (1982) reports a difference in preferences regarding the value placed on helping others, Gabris and Simo (1995) show that there is no difference between workers in the public and private sectors.

In order to explain why we cannot obtain robust evidence about the difference in public service motivation between workers in the public and private sectors, we provide a game-theoretic model. Analyzing this model, we show that the strategic incentive to employ workers with public service motivation, together with the disparity in efficiency between the public and private sectors, accounts for the difference. That is, whether public firms employ more workers with public service motivation than private firms do depends on the efficiency gap between public and private firms.

Our approach is related to the issue of delegation in microeconomic theory and game theory (Vickers, 1985; Fershtman & Judd, 1987). In a delegation model, the objective of the principal is not the same as that of the agent. Thus, the principal designs an incentive scheme for the agent. In our model, the owners and governors of the firms in the private and public sectors decide how many workers with public service motivation to employ. This decision works as a commitment to an incentive scheme. This point is in contrast to previous studies.

This paper is organized as follows. In the next section, we present the model. The third section calculates the equilibrium. The final section offers a conclusion.
The Model

We provide a simple game-theoretic model. We assume a public service market (e.g., the education industry or the security industry) in which one private firm (firm P) and one state-owned firm (firm S) compete in the quantity of goods produced or services provided. In each firm, there is an owner (or governor) as principal and a manager (or public servant) as agent. For the private firm, the owner wants to maximize its profit; for the state-owned firm, the governor wants to maximize social welfare. In order to maximize the principal's objective, the owner (or governor) designs the objective function of the manager (or servant) as the incentive scheme. For simplicity, following Vickers (1985), we assume that the objective functions u_P and u_S designed by the owner and the governor weight the output x_i of firm i against its profit π_i, with the weight θ_i being the incentive scheme chosen by the owner (or governor) of firm i (i = P, S). We can interpret θ_i as the degree to which a private or state-owned firm employs workers with public service motivation, for the following reason. If the firm employs many workers with public service motivation, the firm will tend to provide more public service. In other words, more workers with public service motivation means a larger value of θ_i. Since the owner or governor can decide whom to hire, they indirectly choose the level of θ_i through their hiring policy.

Given the incentive scheme, the manager chooses the output. We consider quadratic production costs, γ x_P² for the private firm and x_S² for the state-owned firm, where γ captures the difference in efficiency. Since in many industries private firms are more efficient than state-owned firms, we assume 0 < γ < 1. We assume that the firms produce homogeneous goods and that they face linear inverse demand, with price denoted by z.

Under this setting, each firm's profit is its revenue z x_i minus its quadratic cost, and the consumer surplus is (x_P + x_S)²/2. The owner then maximizes the private firm's profit, the governor maximizes social welfare, and the manager and servant maximize u_P and u_S, respectively. The timing of the game is as follows. In the first stage, the owner (or governor) of firm i (i = P, S) chooses the incentive scheme θ_i. After the manager (or servant) of firm i observes the incentive scheme, they choose the output x_i in the second stage. We solve the model using backward induction.
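The displayed equations in this section did not survive extraction, so the following sketch makes the model concrete with assumed functional forms rather than the paper's exact ones: agent utility u_i = θ_i x_i + (1 − θ_i) π_i, inverse demand z = 1 − x_P − x_S, and costs γ x_P² (private) and x_S² (state-owned). With these assumptions, the second stage of the backward induction can be solved symbolically.

```python
# Illustrative backward-induction sketch of the second stage. The functional forms
# below are assumptions chosen for concreteness, not the paper's displayed equations:
# u_i = theta_i * x_i + (1 - theta_i) * pi_i, inverse demand z = 1 - x_P - x_S,
# quadratic costs gamma * x_P**2 (private) and x_S**2 (state-owned), 0 < gamma < 1.
import sympy as sp

x_P, x_S, th_P, th_S, gamma = sp.symbols("x_P x_S theta_P theta_S gamma", positive=True)

z = 1 - x_P - x_S                              # linear inverse demand (assumed)
pi_P = z * x_P - gamma * x_P**2                # private firm's profit
pi_S = z * x_S - x_S**2                        # state-owned firm's profit

u_P = th_P * x_P + (1 - th_P) * pi_P           # manager's objective (assumed form)
u_S = th_S * x_S + (1 - th_S) * pi_S           # public servant's objective

# Second stage: each agent chooses output given the incentive scheme (theta_P, theta_S).
foc = [sp.Eq(sp.diff(u_P, x_P), 0), sp.Eq(sp.diff(u_S, x_S), 0)]
second_stage = sp.solve(foc, [x_P, x_S], dict=True)[0]
print(sp.simplify(second_stage[x_P]))
print(sp.simplify(second_stage[x_S]))
```

Under these assumed forms, the first stage proceeds the same way: substitute the second-stage outputs into the owner's profit and the governor's welfare (profits plus consumer surplus) and differentiate with respect to θ_P and θ_S.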
Calculating Equilibrium

In the second stage, the manager of firm P chooses x_P to maximize u_P and the servant of firm S chooses x_S to maximize u_S. The first-order conditions yield the second-stage outputs as functions of (θ_P, θ_S). In the first stage, the owner chooses θ_P to maximize π_P and the governor chooses θ_S to maximize social welfare SW. Substituting the second-stage outcome into these maximization problems and using the first-order conditions, we obtain the first-stage outcome: θ_P exceeds θ_S when the efficiency gap δ lies below the threshold 27/32, and falls short of it otherwise.

This result is illustrated in Figure 1. In this figure, the private firm employs more workers with public service motivation if δ is small, and fewer workers with public service motivation if δ is large. When the private firm is more efficient, δ is closer to zero. Hence, the equilibrium structure of employment depends on the difference in the firms' efficiency.

The intuition behind this proposition is as follows. In our model, a large θ_i works as a commitment device for large output. When the private firm is quite efficient (δ is small), the private firm obtains a high gain from this commitment because of its large margin (price minus marginal cost). Hence, the private firm facing a small δ tends to set a large value for θ_P; that is, it employs many workers with public service motivation. On the other hand, since the state-owned firm maximizes social welfare, it is concerned about efficient production. If the private firm is quite efficient, the state-owned firm wants to shift production from the state-owned firm to the private firm, and for this to happen it must commit to a small output. In other words, the state-owned firm chooses a small θ_S and hence employs fewer workers with public service motivation. The converse of the above argument also holds: the private firm chooses a small θ_P and the state-owned firm chooses a large θ_S when the private firm is inefficient (δ is large). Therefore, in that case, the state-owned firm employs more workers with public service motivation than does the private firm.

Our result can explain why some literature reports a significant difference in public service motivation between employees in the private and state-owned sectors and other literature does not. Since each study uses data from a different industry, the efficiency gap between private and state-owned firms differs across studies. In light of our results, in order to obtain robust evidence about the relationship between the type of firm (private versus state-owned) and the level of public service motivation of its workers, we need to control for the strategic effect of hiring policy. If we used data on the efficiency gap between private and state-owned firms and estimated the relationship between the type of firm and the level of public service motivation of workers, we could remove this strategic effect of hiring policy from the estimates.
Conclusion

We consider a simple model and show that the equilibrium tendency to employ workers with public service motivation depends on the difference between public and private firms' efficiency levels. Hence, we provide an answer to why the previous literature on public service motivation cannot obtain robust evidence on the difference between workers in the public and private sectors. Since we focus on the strategic reason for employing workers with public service motivation, we use a game-theoretic model. However, not every industry is oligopolistic. Therefore, in order to answer our question in competitive industries, one should build a model without strategic interaction. Although this issue is important, we leave it for future research.
Conference Report: THE AUTOMATIC RADIO FREQUENCY TECHNIQUES GROUP CONFERENCE ON CHARACTERIZATION OF BROADBAND TELECOMMUNICATIONS COMPONENTS AND SYSTEMS Denver, CO June 13, 1997

The “return-path” or “upstream” data requirements are growing rapidly in markets such as telecommuting and videoconferencing. To satisfy the demand, the major telecommunications players are rushing to establish high-bandwidth, two-way digital networks for video, telephony, and Internet to homes and businesses. One major technology expected to play a key role in future broadband telecommunications systems is two-way digital transmission over coaxial cable. An advantage of such systems is that much of the infrastructure is already in place for analog cable television systems. The major upcoming competitors to coaxial systems are based on wireless transmission. What these have in common is radio frequency (RF) technology. Successful deployment relies on RF components and subsystems of exceptionally high performance.

The Automatic Radio Frequency Techniques Group (ARFTG) annually sponsors two conferences on critical topics in RF technology, with a focus on measurement issues. For its 49th Conference, held in Denver, CO on June 13, 1997, ARFTG chose the theme “Characterization of Broadband Telecommunications Components and Systems” to address the critical RF technology issues of broadband communications, needs that have not been directly addressed by other microwave or radio frequency symposia. The conference, which was cosponsored by the Microwave Theory and Techniques (MTT) Society of the Institute of Electrical and Electronics Engineers (IEEE), featured 18 technical talks, 11 poster presentations, and a product exhibition. It proved to be popular, as the registration level of 205 broke the ARFTG record of 168. Attendees came from at least 18 nations: Australia, Belgium, China, Finland, France, Germany, Italy, Japan, Korea, the Netherlands, Norway, Canada, Poland, Singapore, Sweden, Taiwan, the United Kingdom, and the United States. Dr. Roger Marks of NIST was the Conference Chair.

Oral Sessions

The Technical Sessions, under the direction of Conference Technical Program Chair Dr. Gary Alley of Lucent Technologies, included 18 talks and 11 poster presentations. The oral sessions were presented in the traditional ARFTG single-track format. The morning talks focused on coaxial systems and the afternoon on wireless.

Broadband Coaxial Systems

Over the past 7 years, community antenna television (CATV, or cable television) systems have been evolving from one-way transmission of NTSC (U.S. Standard) video signals using long cascades of coaxial amplifiers and directional couplers into bidirectional transmission of NTSC video and digital signals on Hybrid Fiber-Coaxial (HFC) distribution systems. The use of linear lightwave transmitters and receivers has resulted in improved performance for NTSC video transmission but has increased the degradation in digital performance due to laser clipping caused by peaks produced by the broadband multichannel NTSC video waveform. HFC systems being deployed today typically transmit signals from the headend to the home in the 50 MHz to 750 MHz portion of the spectrum while transmitting digital signals from the home to the headend in the 5 MHz to 40 MHz band.
The downstream band from 50 MHz to 550 MHz is generally used for up to 83 NTSC video channels, while the spectrum from 550 MHz to 750 MHz is reserved for a mixture of digital services including telephony, video telephony, compressed digital video, and Internet access. It has been known for many years that the performance of the upstream portion of these systems, at 5 MHz to 40 MHz, was limited by ingress into the coaxial portion of the system. The sources of this ingress include impulse noise due to lightning and power line currents, intermodulation noise due to the downstream signals in the 50 MHz to 750 MHz band, and coupling of RF broadcast signals into the cable system. The effect of these interfering signals on upstream digital transmission is the subject of current research.

The conference talks on broadband coaxial systems were:
• RF Measurements for Broadband Networks, S. Fluck (Hewlett-Packard Co., Santa Rosa, CA)

The morning sessions led off with an invited keynote address by Syd Fluck of Hewlett-Packard. Fluck discussed the evolution of CATV systems from the 1960s through the present. The introduction of digital transmission on cable systems resulted in the need to characterize the digital performance of these systems in their commercial environment. Current test sets and methods required to accomplish this task were discussed.

Williams discussed the test and maintenance needs of the reverse plant, as well as a set of nontraditional burst-mode tests. These tests used a high-speed analog-to-digital converter to capture test and data signals, which were then analyzed using a personal computer located at the headend. Examples of data collected and analyzed using these techniques were presented.

This paper presented methods for characterizing both ingress and impulse noise in the upstream portion of the system. These methods were independent of the modulation and access techniques used in the systems. Examples were presented using data collected in the field.

Bianchi discussed the digital system performance of an operational HFC CATV system using measured data from both upstream and downstream 4 and 64 Quadrature Amplitude Modulation (QAM) modems. The resulting performance was found to vary with respect to time, to signal amplitude, and to direction of transmission. In the forward path, the effect of background impairments varies with time and signal power. In the return path, the effect of interferer-like impairments limits the system performance. Statistical bit-error-rate (BER) data were presented along with BER and signal-to-noise (S/N) data which show the variation in system performance with time.

Steel presented a paper which discussed the need for characterization of cable network elements for use in digital cable networks. The paper focused on the characterization of nonlinear, frequency-dependent network elements by using a combination of filter blocks and nonlinear blocks. The 2nd- and 3rd-order distortion performance of CATV amplifiers was presented as an example of the technique.

Alley discussed a technique for minimizing the peak-to-RMS values of the broadband multichannel NTSC video waveform while minimizing both the 2nd- and 3rd-order distortion products. This was accomplished by optimally controlling the phases of the NTSC video carriers. Both theoretical and experimental results were presented. This paper was selected to receive the conference's Best Paper Award.

• CATV Tap and Splitter Linearity Improvement for Broadband Information Networks, M. W. Goodwin (Lucent Technologies, N. Andover, MA)
Goodwin presented an analysis of the source of the intermodulation distortion in HFC CATV systems produced by CATV taps and splitter/combiners, as well as a method for improving the linearity of these devices.

Broadband Wireless Systems

Broadband wireless access is emerging as an alternative method of providing a high-capacity digital channel to and from the home. Multichannel Multipoint Distribution Systems (MMDS, or "wireless cable") have existed for some time, offering up to 33 analog video channels in the 2.150 GHz to 2.682 GHz band. The current direction with MMDS is the introduction of compressed digital video in an effort to increase the capacity of these systems and make them more competitive with cable and satellite video distribution systems. The FCC's allocation of over 1 GHz of bandwidth between 27.5 GHz and 31.3 GHz has stimulated interest in Local Multipoint Distribution Systems (LMDS). The bandwidth available is expected to provide high-capacity two-way wireless services to the home. Wireless systems are physically easier to deploy than wired systems but still present significant challenges to the system provider. Issues such as co-channel adjacent-cell interference and multipath propagation, with the resulting finite coherence bandwidth, must be addressed.

Yang presented a novel technique which has been developed to optimize power monolithic microwave integrated circuit (MMIC) performance by using on-wafer pulsed power tests. The tests are used to determine the functionality of the device and allow the MMIC chip performance to be optimized through manual bias tuning at the module level.

Interactive Forum

The Conference included 11 poster papers in an Interactive Forum. While these papers addressed significant problems in ARFTG's traditional field of microwave measurements, most did not directly address the primary conference theme.

Joint Session on Crosstalk

On Thursday, June 12, ARFTG cosponsored a session of the 1997 IEEE MTT-S International Microwave Symposium. The session, entitled "Crosstalk, Coupling, and Multiconductor Transmission Line Characterization," was organized and chaired by Dr. Dylan F. Williams.

Summary

Broadband telecommunications systems are undergoing rapid development and will soon begin to make a significant thrust into consumer and industry markets. Advances in radio frequency technology are critical to the cost and timing of this potential economic and sociological revolution.

Proceedings

The 49th ARFTG Conference Digest, which includes 36 papers in 251 pages, was distributed at the conference. Ordering information is available on the ARFTG web site or from the ARFTG Executive Secretary (+1-602-839-6933).

Future Conferences

The 51st ARFTG Conference will be held in Baltimore on Friday, June 12, 1998 in conjunction with the 1998 IEEE MTT-S International Microwave Symposium. The meeting topic is "Characterization of Spread Spectrum Components and Systems." In order to continue its focus on broadband telecommunications, ARFTG will cosponsor a Joint Session with the 1998 IEEE MTT-S International Microwave Symposium on the topic "Broadband Telecommunications Systems" during the week of June 8 in Baltimore. A second Joint Session will cover "Digital Interconnection Techniques and Characterization at GHz Frequencies." In addition to its odd-numbered conferences in the spring, ARFTG presents an even-numbered conference each fall during the week after Thanksgiving.
The 50th ARFTG Conference, on "Measurement Techniques for Digital Wireless Applications," will take place on December 4-5, 1997 at the Benson Hotel in Portland, OR. ARFTG and NIST will also present their fourth annual Microwave Measurements Short Course on December 2-3.

More Information

More information on ARFTG and its conferences is available on the ARFTG Web Site at http://www.arftg.org.
17β-Estradiol Regulates the Sexually Dimorphic Expression of BDNF and TrkB Proteins in the Song System of Juvenile Zebra Finches Mature brain derived neurotrophic factor (BDNF) plays critical roles in development of brain structure and function, including neurogenesis, axon growth, cell survival and processes associated with learning. Expression of this peptide is regulated by estradiol (E2). The zebra finch song system is sexually dimorphic – only males sing and the brain regions controlling song are larger and have more cells in males compared to females. Masculinization of this system is partially mediated by E2, and earlier work suggests that BDNF with its high affinity receptor TrkB may also influence this development. The present study evaluated expression of multiple forms of both BDNF and TrkB in the developing song system in juvenile males and females treated with E2 or a vehicle control. Using immunohistochemistry and Western blot analysis, BDNF was detected across the song nuclei of 25-day-old birds. Westerns allowed the pro- and mature forms of BDNF to be individually identified, and proBDNF to be quantified. Several statistically significant effects of sex existed in both the estimated total number of BDNF+ cells and relative concentration of proBDNF, varying across the regions and methodologies. E2 modulated BDNF expression, although the specific nature of the regulation depended on brain region, sex and the technique used. Similarly, TrkB (both truncated and full-length isoforms) was detected by Western blot in the song system of juveniles of both sexes, and expression was regulated by E2. In the context of earlier research on these molecules in the developing song system, this work provides a critical step in describing specific forms of BDNF and TrkB, and how they can be mediated by sex and E2. As individual isoforms of each can have opposing effects on mechanisms, such as cell survival, it will now be important to investigate in depth their specific functions in song system maturation. Introduction Brain-derived neurotrophic factor (BDNF) is critical for diverse aspects of brain development and function, including cell survival, axon guidance, synaptic connectivity, dendritic arborization, longterm potentiation, and memory consolidation. The peptide is synthesized via precursors, prepro-then pro-BDNF, which is cleaved and secreted in the mature form. This secretion can occur in a regulated, activity dependent manner from either axons or dendrites (thus having anterograde or retrograde action), or via more passive, constitutive mechanisms [1,2,3]. BDNF binds to two types of receptors in the brain, with highaffinity to tyrosine kinase B (TrkB; [4,5,6]) and with low-affinity to the p75 receptor [7,8]. All neurotrophins bind to the p75 receptor [9], thus its functions are not specific to BDNF. TrkB is more selective; it is the high affinity receptor for BDNF and neurotrophin-4 (NT-4). Isoforms of TrkB exist. The full-length form (TrkB-FL) contains a cytoplasmic domain that activates a variety of signaling cascades [10]. It is through this receptor that the vast majority of the enhancing effects on neuronal structure and function are elicited. However, an alternatively spliced variant (truncated; TrkB-T) lacks this intracellular portion and generally inhibits BDNF action (reviewed in [11]; see below). Steroid hormones and BDNF interact. In particular, estradiol (E2) increases expression of BDNF mRNA and protein selectively in vivo and in vitro [12,13,14,15]. 
mRNAs for estrogen receptors are co-expressed with BDNF and/or TrkB in a variety of forebrain regions in the developing rodent [15,16,17,18]. While E2 does not appear to modulate TrkB expression in some situations (e.g., developing male rat hippocampus [15]), E2 does increase TrkB protein in hypothalamic neuronal cultures from male rat brains, which is necessary for estrogenic effects on axon growth [19]. The song control system of zebra finches has long been an important model for investigating the effects of E2 on development of neural structure and function. Only males of this species sing, and most of the brain regions that control song learning and production are larger in males compared to females [20,21]. Song control regions include the lateral magnocellular nucleus of the anterior nidopallium (LMAN) and Area X, which are critical to song development, and the HVC (used as a proper name) and robust nucleus of the arcopallium (RA), which are involved in the motor production of song. E2 treatment in female zebra finches during the first few weeks after hatching can masculinize song control nuclei (particularly HVC, RA and Area X) by increasing cell number and size, as well as the volume of those areas. Developmental treatment with E2 also enables females to sing in adulthood. However, the E2 alone cannot fully masculinize the song system in female zebra finches [20,21], suggesting that other factors may be involved in the process of sexual differentiation. This notion is supported by studies that failed to prevent masculinization of song nuclei and the development of singing behavior by inhibiting the availability or action of estrogen in the early life of male zebra finches [22,23,24,25,26,27,28,29,30]. These studies also raise some questions about the role E2 might play in developing males. At the juvenile stages investigated, both plasma levels and the capacity for neural synthesis of the hormone are generally equivalent in the two sexes (reviewed in [20]). One possibility is that E2 serves to increase BDNF protein, which subsequently contributes to the masculinization process. Previous work has indicated that E2 treatment of juvenile males and females results in an increase of BDNF mRNA in HVC. Moreover, inhibition of estrogen synthesis blocks an increase of BDNF mRNA expression seen in males in this region between post-hatching days 25 and 35 [31]. Sex chromosome genes may also be strong possibilities for facilitating masculine development [32]. Male birds are ZZ, and females ZW. Because dosage compensation in birds is limited, the expression of Z-genes is greater in males compared to females [33], including within specific song control nuclei [34,35,36,37,38,39,40]. TrkB is on the Z-chromosome, and its mRNA exhibits higher expression in the song system of developing males [41]. TrkB protein was also detected in the RA of males at 15-20 days of age [42], and across the song system at later developmental stages [43]. Up-regulation of this receptor could provide a mechanism for increased BDNF action in song system masculinization. Thus, BDNF may facilitate masculinization of zebra finches via two mechanisms. E2 may increase availability of the protein (which might occur in both sexes to some extent) and higher expression of TrkB in males may increase its ability to act. To further elucidate mechanisms regulating masculinization, the effect of E2 on BDNF was investigated in juvenile male and female zebra finches. 
We evaluated BDNF protein, both the number of cells expressing it and its relative concentration, across the forebrain song control regions. Because potential effects of E2 on TrkB have not been reported, we also investigated this protein in song nuclei using Western blot analyses to distinguish between the full-length and truncated forms of TrkB.

Animals

Zebra finches were raised in walk-in colonies, containing approximately 7 adult males and females and their offspring. The birds were exposed to a 12:12 light:dark cycle. Finch seed and water were continuously available, and a mixture of bread and hard-boiled chicken eggs, as well as spinach and orange, were provided once a week. Nest boxes were checked daily; the day a hatchling was found was post-hatching day 1. All procedures were conducted in accordance with NIH guidelines and approved by the Michigan State University IACUC.

Hormone treatment and tissue collection

Males and females each received a subcutaneous implant of either 17β-estradiol (Steraloids, Welton, NH) or a blank pellet (n = 6 per sex per treatment) on post-hatching day 3 (as in [37]). Hormone implants were produced using a 1:5 mixture of E2 and silicone sealant (Dow Corning, Midland, MI) that was expelled in a line through a 3-cc syringe onto wax paper and dried overnight. The mixture was cut into 1 mm lengths and then quartered, so that each implant contained approximately 100 µg of E2. Control blank pellets (BL) were produced identically, except that they did not contain the hormone. Each animal was rapidly decapitated at 25 days of age. Brains were separated into two hemispheres, frozen in cold methyl-butane and stored at −80 °C. The left and right sides were randomly selected for use in Western blot analyses and immunohistochemistry. Sex was determined at this time by visual examination of the gonads. For Western blots, one hemisphere of each brain was sectioned at 50 µm, and punches (ranging from 9 to 25, depending on song nucleus size) were collected individually with a stainless steel cannula (0.5 mm diameter; Stoelting Co., Wood Dale, IL) from each section in which HVC, RA, LMAN, and Area X could be identified (Figure 1). These regions are readily identified based on a variety of visual characteristics, including surrounding landmarks such as fiber tracts and the lateral ventricles, and differences in color and consistency compared to surrounding tissue. The first three areas were collected from both sexes, but as Area X cannot be detected in control females, a comparable portion of the medial striatum (MSt) was obtained from control females based on landmarks. Tissue was expelled into 100 µl of RIPA lysis buffer (sc-24948; Santa Cruz Biotechnology, Santa Cruz, CA) and stored at −20 °C until protein extraction. The remainder of each section was Nissl-stained to confirm accuracy of the punches (Figure 1). For immunohistochemistry, the remaining hemisphere from each bird was cut frozen at 20 µm into six alternate series of sections. Slides were stored at −80 °C with desiccant until processing.

BDNF and TrkB Western Blot Analyses

The specificity of the BDNF antibody was examined using Western blot analyses with 30 µg total protein from the whole telencephalon of two juvenile male zebra finches. Two bands, representing the 38 kDa proBDNF and 14 kDa mature BDNF, were recognized by the antibody (see below).
Labeling on both Western blots and in tissue sections used for immunohistochemistry was completely eliminated when the primary antibody was preadsorbed with the peptide against which it was raised, as well as when the BDNF primary antibody was omitted (Figure 2). Specificity of the TrkB antibody was previously verified [43]. Experimental analyses were completed on protein extracted from the samples of HVC, RA, Area X (or an equivalent portion of the MSt in control females), and LMAN of each individual using RIPA lysis buffer per the manufacturer's instructions. Concentrations were quantified with the Bradford method (Bio-Rad; Hercules, CA). Samples (8 µg of total protein, which was the maximum available for some individuals) from each brain region were run on three precast gels (Any kD Mini-PROTEAN TGX; Bio-Rad; Hercules, CA), with each gel containing 2 samples from each group and a ladder for size determination (Precision Plus Dual Color Standard; Bio-Rad). The protein was then transferred to PVDF membranes at 4 °C. Membranes were treated with the SuperSignal Western Blot Enhancer kit (Thermo Scientific; Rockford, IL) according to the manufacturer's instructions, and blocked with SuperBlock Blocking Buffer (Thermo Scientific; Rockford, IL) for 60 minutes at room temperature to eliminate non-specific binding. Membranes were then incubated with the BDNF (N-20) primary antibody (1 µg/ml; sc-546; Santa Cruz Biotechnology, Santa Cruz, CA) in primary antibody diluent from the SuperSignal Western Blot Enhancer kit overnight at 4 °C. Horseradish peroxidase-conjugated goat anti-rabbit secondary antibody (1:5,000; Cell Signaling, Danvers, MA) was applied to the membranes at room temperature for 1 hour. Immunoreactivity was detected by chemiluminescence (SuperSignal West Pico, Thermo Scientific; Rockford, IL) followed by exposure to HyBlot CL autoradiography film (Denville Scientific Inc.; Metuchen, NJ). The membranes were stripped in Restore Plus Western Blot Stripping Buffer (Pierce; Rockford, IL) per the manufacturer's instructions, washed in 1X PBS-Tween 20, and re-probed for actin as a loading control (0.5 µg/ml, sc-1615; Santa Cruz Biotechnology; Santa Cruz, CA) overnight at 4 °C. They were then incubated with HRP-conjugated donkey anti-goat secondary antibody (1 µg/30 ml, sc-2020; Santa Cruz Biotechnology, Santa Cruz, CA) for 1 hour at room temperature, and the reaction product was visualized as above. Finally, each membrane was stripped again as described above and re-probed with the TrkB primary antibody (1 µg/ml, sc-12; Santa Cruz Biotechnology; Santa Cruz, CA) followed by goat anti-rabbit-HRP secondary antibody (1:5,000; Cell Signaling, Danvers, MA). The 38 kDa proBDNF band was present in all samples. However, the 14 kDa mBDNF band was only detectable in some (Table 1), even after prolonged exposure to film (1 hour for HVC; 2-2.5 hours for LMAN and RA; Area X/MSt never produced a clear signal). The long exposure required to detect mBDNF bands resulted in high background, so the relative optical density could not be quantified. Therefore, only the ratio of proBDNF to actin from each sample was analyzed for each brain region mentioned above. While the truncated form of TrkB (TrkB-T, 95 kDa) was detected in each brain region examined, the full-length TrkB (TrkB-FL, 145 kDa) was consistently observed in HVC only. Therefore, the ratio of TrkB-T/actin was analyzed in all song nuclei and the ratio of TrkB-FL/actin was quantified only in HVC.
The mean optical density for each band of interest was quantified using ImageJ (NIH). A value was also obtained for an immediately adjacent region of the same size (Figure 2), which was subtracted to control for background. The ratio of proBDNF or TrkB to actin was then calculated and analyzed by two-way ANOVA (sex × treatment) in each brain region, followed by pairwise planned comparisons as appropriate. A few samples could not be included in statistical analyses due to a poor film image; final sample sizes are included in the figures.

BDNF Immunohistochemistry

One set of slides from each animal was warmed to room temperature, rinsed in 0.1 M phosphate-buffered saline (PBS), fixed in 4% paraformaldehyde for 15 minutes, and washed 3 times in PBS. Slides were exposed to 0.9% H2O2/methanol for 30 minutes and incubated for 30 minutes in 3% normal goat serum in PBS with 0.3% Triton X-100. The tissue was then incubated in 0.1 M PBS containing 0.3% Triton X-100, 3% NGS and BDNF primary antibody (0.5 µg/ml; same as used for Western blots) overnight at 4 °C. A biotin-conjugated goat anti-rabbit secondary antibody (1 µg/ml; Vector Labs, Burlingame, CA) was then applied for 1.5 hours at room temperature, followed by treatment with Elite ABC reagents and diaminobenzidine (DAB) with 0.0024% hydrogen peroxide to produce a brown reaction product. Slides were then rinsed in PBS to ensure the reaction was terminated. An adjacent set of slides from each animal was stained with Cresyl violet to localize the song control nuclei. Slides were coverslipped with DPX (Fluka, St. Louis, MO) after dehydration in a graded series of ethanols.

Stereological Analyses

LMAN, HVC and RA were analyzed in both males and females. However, Area X cannot be detected with a Nissl stain in female zebra finches, and its borders also could not be distinguished with BDNF labeling. Therefore, labeled cells in this region were only quantified in males and E2-treated females. As a control, we analyzed BDNF expression in nucleus rotundus (RT), a sexually monomorphic thalamic nucleus [44,45], to determine whether the effects of E2 were specific to song nuclei. Regions of interest from each animal were analyzed under brightfield illumination using StereoInvestigator software (Microbrightfield Inc., Williston, VT) by an individual blind to the sex and age of the animals. The border of each song nucleus was defined by tracing its edge throughout its rostrocaudal extent. All cells exhibiting neuronal morphology and clear reaction product for BDNF were manually counted in regions selected by the Optical Fractionator function [39]. Due to tissue quality, data for all brain regions could not be obtained from every individual. Final sample sizes are indicated in the figures. Within HVC, RA, LMAN and RT, the estimated total number of BDNF+ cells was analyzed by two-way ANOVA. Main effects of sex and treatment, as well as potential interactions between the variables, were assessed. Planned, pairwise comparisons were conducted when sex × treatment interactions existed. Because the region is not detectable in control females, in Area X the estimated total number of BDNF+ cells was analyzed by one-way ANOVA among control males, E2-treated males and E2-treated females.
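As a minimal sketch of how the two-way (sex × treatment) ANOVAs on the band-density ratios could be implemented, assuming the background-subtracted ratios have been tabulated one row per bird, the snippet below uses Python with statsmodels; the column names and the toy numbers are placeholders, and statsmodels is simply one convenient implementation of the analysis described above.

```python
# Sketch of the sex x treatment ANOVA applied to proBDNF/actin (or TrkB/actin) ratios.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per bird: background-subtracted band density ratio for a single region.
df = pd.DataFrame({
    "sex":       ["M", "M", "M", "F", "F", "F", "M", "M", "M", "F", "F", "F"],
    "treatment": ["E2"] * 6 + ["BL"] * 6,
    "ratio":     [1.32, 1.18, 1.41, 1.25, 1.37, 1.29,
                  0.98, 1.05, 0.91, 1.02, 0.95, 1.08],   # placeholder values
})

model = ols("ratio ~ C(sex) * C(treatment)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)   # main effects of sex, treatment, and their interaction
print(anova)
```

Significant interactions would then be followed up with the planned pairwise comparisons mentioned above.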
Results

proBDNF western blot analyses

LMAN. A significant main effect of treatment was detected (F(1,18) = 4.610, P = 0.046), with an increase in the proBDNF/actin ratio in E2-treated compared to control animals (Figure 3, top). Neither a main effect of sex nor an interaction between sex and treatment was found (both F(1,18) < 0.686, P > 0.418).

HVC and Area X (MSt). No effects of sex, treatment or interaction were observed in these remaining areas (all F < 3.494, P > 0.077; data not shown).

TrkB western blot analyses

LMAN. No main effects of sex (F(1,19) < 0.001, P = 0.991) or treatment (F(1,19) = 0.345, P = 0.564) were found on the TrkB-T/actin ratio. A significant interaction between sex and treatment was detected (F(1,19) = 4.657, P = 0.044), which suggested that E2 might have different effects in males and females. However, pairwise comparisons indicated only trends toward significant differences (Figure 4, top).

RA. A significant main effect of treatment was detected in the ratio of TrkB-T/actin (F(1,17) = 6.308, P = 0.022). This value was increased in E2-treated birds compared to control animals (Figure 4, bottom). Neither a main effect of sex nor an interaction between sex and treatment was detected (both F(1,17) < 0.074, P > 0.789).

HVC. For the ratio of TrkB-FL/actin (Figure 5, top), a main effect of treatment was detected, with E2 increasing the value compared to the control (F(1,20) = 6.864, P = 0.019). There was no main effect of sex (F(1,20) = 0.075, P = 0.788) and no interaction between sex and treatment (F(1,20) = 0.050, P = 0.825). For the ratio of TrkB-T/actin (Figure 5, bottom), a significant main effect of sex was detected, such that it was greater in females than males (F(1,17) = 8.241, P = 0.011). An interaction between sex and treatment was also detected for TrkB-T/actin (F(1,17) = 4.455, P = 0.050). Pairwise comparisons revealed a higher level in control females than control males (t(9) = 10.537, P = 0.010).

Area X (MSt). No significant effects of sex, treatment or interaction between sex and treatment were found in the TrkB-T/actin ratio (all F(1,19) < 0.923, P > 0.349; data not shown).

Estimated total number of cells labeled with the BDNF antibody

LMAN. A significant main effect of sex was detected (F(1,20) = 4.807, P = 0.040), with females having more BDNF+ cells than males. A main effect of treatment also existed (F(1,20) = 4.743, P = 0.042); E2 decreased the number of BDNF+ cells compared to the controls (Figure 6). There was no interaction between sex and treatment (F(1,20) = 0.457, P = 0.507).

Area X. The estimated total number of BDNF+ cells was equivalent across the three groups (F(2,15) = 1.548, P = 0.245; data not shown).

Summary

A number of effects of both sex and E2 were detected for BDNF. In both the HVC and RA, males had more cells expressing this protein, and E2 masculinized this characteristic in females. The results were opposite in LMAN, with more BDNF+ cells detected in females and E2 decreasing this value (in both sexes). As with other markers [20,21,39,46], Area X could not be detected in control females in the present study. However, E2 induced a visible Area X defined by BDNF labeling. Mature BDNF was consistently detected by Western blot, particularly in HVC and RA, but it could not be quantified due to limited protein availability; it was difficult to obtain sufficient signal with low enough background. However, proBDNF could be readily measured, and the patterns differed from those for BDNF+ cell number. Relative proBDNF concentration in RA was greater in males than females and was increased by E2 in females. This up-regulation by E2 was also seen in LMAN. The present results provide novel information regarding BDNF on at least two levels.
First, the HVC, RA and LMAN of unmanipulated, 25-day-old animals of both sexes express BDNF, as does the Area X of control males. Second, the availability of the protein can be modulated by E2, although the specific nature of the regulation depends on brain region, sex and form of the protein. The difference in the pattern of results between the Western blot analysis and immunohistochemistry suggests that at least some of what was quantified stereologically was mature BDNF. However, it is also possible that estimation of numbers of cells expressing this (or any) protein does not directly parallel its local concentration. Like BDNF, the relative concentration of TrkB exhibited different patterns among the song nuclei. TrkB-T was significantly higher in control females than control males only in HVC. E2 had no effect in this region but increased the receptor in the RA of females. In contrast, no effect of sex was detected on the full-length form of this receptor (TrkB-FL, which was only detected consistently in HVC), and E2 significantly increased its expression in both males and females. The full-length vs. truncated forms of the TrkB receptor had not been distinguished in songbirds. The current data indicate that at least some of what has been detected in juveniles by immunohistochemistry across HVC, RA, LMAN and Area X is the truncated form. The data also indicate that expression of both isoforms can be increased by E2, depending on brain region. Functions of BDNF and TrkB isoforms Mature BDNF positively regulates a range of effects on the central nervous system. These include increasing the proliferation, migration, survival and differentiation of neurons, as well as increasing synaptic plasticity. These actions occur via the full length TrkB receptor [47,48,49]. In contrast, proBDNF itself can be released either by cultured neurons in vitro [50,51,52] or by central neurons in vivo [53], suggesting it may have specific functions in the brain. ProBDNF has a variety of negative functions, including promoting cell death, decreasing dendritic spine density, inhibiting neuronal migration, and attenuating synaptic transmission; these effects occur via the p75 pan neurotrophin receptor [52,53,54,55]. Similar to the pro-and mature forms of BDNF, the full-length and truncated forms of the TrkB receptors have largely opposite effects. The truncated form can bind and internalize/sequester BDNF, but does not undergo autophosphorylation or function as a tyrosine kinase receptor. TrkB receptors dimerize; full-length homodimers mediate the neurotrophic effects of BNDF. Heterodimers of full-length and truncated receptors have dominantnegative functions that inhibit signaling. Thus, the truncated form of the TrkB receptor generally serves to inhibit BDNF activity. However, independent of this ligand, homodimers of TrkB-T can induce some neurite outgrowth via mechanisms that are not well understood [11]. BDNF and TrkB in the Song System A variety of supportive roles of BDNF have been documented in songbirds. Infusion of this neurotrophin into RA during development prevents cell death following removal of presynaptic input by lesioning LMAN [42]. Injection into the adult zebra finch RA introduces song plasticity, suggesting that BDNF regulates variability in a manner critical to learning [56]. 
In adult canaries, which unlike zebra finches exhibit seasonal changes in song and morphology of song control regions, TrkB is present in the HVC of both sexes, and BDNF protein is in the HVC of males only, where it is involved in the regulation of neuronal replacement [57]. In male canaries, HVC BDNF mRNA is up-regulated by singing, and in parallel, the survival of new neurons is increased in singing birds [58]. Similar results are seen in white-crowned sparrows [59]; BDNF mRNA is increased in HVC by the long days typical of spring breeding conditions. Data from infusing and inhibiting BDNF in RA indicate its importance for seasonal plasticity of the song system in this species as well. While more work on BDNF function has been done in adulthood than development, collectively the data suggest the potential for BDNF acting at TrkB receptors to regulate a variety of aspects of structure and/or function of the song system. Several studies have localized BDNF and TrkB in the developing song system. Unfortunately, a number of inconsistencies exist across the results, perhaps in part due to differences in methodology. BDNF mRNA has been detected in the HVC of males, but not females, at 30-35 days post-hatching. This expression was increased by E2. In contrast, BDNF mRNA was not detected in the RA of juveniles of either sex, and levels in LMAN were reported as being very low, but perhaps increased compared to surrounding tissue; this labeling was not in RAprojected LMAN cells [31]. Cell bodies containing BDNF protein were detected in the LMAN but not RA of 15-20 day old males; fibers were reported in RA of these birds [42]. Akutagawa and Konishi [60] reported BDNF protein in the HVC of males at day 20 and RA at day 45. BDNF emerged in the LMAN and Area X of males between days 45 and 65. This study reported very little BDNF immunoreactivity in the song system of adult males. In contrast, Johnson et al. [61] suggest that the labeling with this antibody in RA is due to a technical artifact and that BDNF in HVC is comparable to surrounding telencephalic tissue and equivalent in juvenile and adult birds. One antibody to the extracellular domain of TrkB indicated cell bodies, neuropil and fibers in males at d15-20 in RA [42]. Another antibody, the one used in the present paper which also recognizes both isoforms, revealed labeling in somata and neuropil that appeared to define the HVC and RA in males from days 30-60, and in females on days 45 and 60. Scattered cells were also detected in LMAN. However, consistent labeling was not detected in the song system before these ages [43]. TrkB mRNA has been documented in the RA, HVC, and LMAN of juveniles in both sexes [31]. It also is expressed higher levels in the forebrain of males compared to females in the first week after hatching. Within HVC specifically, the mRNA expression is higher in males compared to females as early as 6 days after hatching [41]. Synthesis of Existing Data In the present study, relative proBDNF levels within HVC were unaffected by sex and E2 treatment, as determined by Western blot analysis. However, more cells expressing BDNF protein were detected in males than females, and E2 increased this measure in females only. The relationship between proBDNF concentration in protein homogenized from much of a song nucleus and the number of cells in which multiple forms of BDNF may be detected in the entire region is not completely clear. 
However, the divergence in the data suggest that one interpretation is that the sex and treatment effects in HVC reflect increases in mature BNDF, and not proBDNF, in males and E2-treated females. At the age our data were collected (post-hatching day 25), males already have more cells in HVC than females [62]. Thus, the sex difference we detected by immunohistochemistry could simply reflect a difference in cell survival or addition caused by other factors. The sex-specific E2 effect is consistent with the idea that BDNF up-regulation may be one mechanism by which E2 increases cell survival in masculinization of the female HVC. Alternatively, it is possible that a general increase in HVC cell number due to E2 treatment of females (e.g., [20]) is regulated by an independent mechanism, and the enhanced expression of BDNF that we detected simply reflects the survival of these cells. Future studies should address these ideas. E2's increase of TrkB-FL in HVC across the two sexes provides an opportunity for the function of BDNF to be enhanced, but also suggests that the mechanism underlying the female-specific increase in BDNF+ cells by E2 is not regulated by differential availability of this receptor. In contrast, the greater expression of TrkB-T (which facilitates apoptosis, see above), in females compared to males could generally serve to facilitate the development of sex differences in cell number in HVC. Unlike HVC, in RA the results on BDNF from Western blot analysis and immunohistochemistry were parallel. The fact that relative proBDNF concentration and the estimated total number of cells expressing BDNF were both greater in males than females, and that E2 increased both measures in only females, suggest that what was detected by immunoshistochemistry may have largely been proBDNF. Alternatively, the pattern of mature BDNF expression across groups may mirror that of proBDNF. Further work is needed to distinguish between these possibilities. Regardless, the fact that overall cell number in RA does not diverge between the sexes until after day 25 [62] suggests that sex differences in BDNF labeling by immunohistochemistry are not the passive result of the cell loss that begins to occur around this time. In contrast, greater expression of BDNF might actively promote greater survival of cells in the RA of males compared to females, and E2 might masculinize this function in females. In addition to directly testing these hypotheses, it will be important to determine why E2 affected BDNF in only females both in RA and HVC (see above). One possibility is that males' responses to endogenous E2 may have already been maximized so no further change due to exogenous E2 was exhibited. Another possibility is that estrogenic up-regulation of BDNF is modulated by activity of one or more sex chromosome genes. Both Z and W genes exhibit differential expression between the sexes, including the brain [32,34,35,36,63], so numerous candidates are plausible. As noted in the Introduction, TrkB is on the Z-chromosome, and its mRNA is increased in developing males compared to females, at least in HVC. However, the present data do not support the idea that either form of the TrkB protein is a key contributor to masculinization in HVC or RA. In RA, only TrkB-T could be quantified, and expression of this protein was not sexually dimorphic. In HVC, the relative concentration of TrkB-FL was equivalent between the sexes, and TrkB-T was increased in females. 
Thus, while this truncated isoform could be critical for feminization (or demasculinization), it is an unlikely candidate for enhancing structure or function of HVC. This truncated isoform largely inhibits BDNF functions such as cell survival, and complementarily BDNF negatively regulates TrkB-T mediated cytoskeletal changes, such as dendritic growth [11]. In LMAN, the number of BDNF+ cells was greater in females than males at post-hatching day 25. Unlike other song control nuclei, LMAN exhibits similar volume and dendritic morphology between the sexes at this age. Soma size is also equivalent in males and females until at least day 50 [64]. Thus, while changes in cell number across development of LMAN are not completely clear [64], available information is consistent with the idea that this variable is also equivalent between the sexes at this juvenile stage. It is not obvious whether the larger BDNF+ cell number in females reflects the pro-or mature form, or a combination of both. However, the fact that proBDNF concentration as detected in Western blot analysis did not differ between the sexes whereas immunohistochemically labeled cells did, suggests that this labeling may largely reflect the mature peptide. It is also possible that while females have more proBDNF+ cells they express less per cell, resulting in a lower concentration of proBDNF detected by Western blot analyses. If the mature BDNF is in fact greater in females than males, one intriguing possibility is that this female-biased effect, which does not occur in the other song nuclei analyzed, is what keeps LMAN morphology at this age similar between the two sexes (for example, see [65]). If factors, perhaps including those encoded on sex chromosomes, generally induce differentiation of neural structure and function, then a larger number of BDNF cells in the LMAN of females might prevent demasculinization that would have otherwise occurred. Such a process may be particularly important at post-hatching day 25, because at this age both males and females appear to learn characteristics of adult song [66], and LMAN may play a critical role [64]. In this brain region, E2 increased the relative concentration of proBDNF but decreased the estimated total number of BDNF+ cells. These results are difficult to interpret both because of their opposite directions and because little is known about the function of E2 in LMAN development. While just speculation, this pattern is consistent with the possibility that E2 increases BDNF synthesis while also facilitating release of the mature peptide, which could limit detectability by immunhistochemistry. The studies examining the masculinizing effects of this hormone have not quantified features of LMAN morphology or its connections [67,68,69]. However, as early E2 can masculinize song learning [70,71], perhaps effects mediated by forms of BDNF are more behavioral than structural. That TrkB-T exhibited no obvious effect of sex or treatment is consistent with the fact that this brain region will after day 30 shrink in parallel in the two sexes [64]. Finally, in Area X (or the equivalent portion of the MSt in females), Western blot analyses did not reveal effects of sex or treatment on either proBDNF or TrkB-T. However, the fact that BDNF+ cell number was equivalent across control males and E2treated individuals of both sexes indicates that E2 in fact masculinized the number of cells expressing BDNF peptides in females. 
This effect is consistent with an E2-induced appearance of a distinct Area X with Nissl-staining [20,21,46]. We do not have sufficient information to know whether BDNF is a component of the process through which this morphology is masculinized or whether up-regulation of this peptide is a consequence of the masculinization. However, the absence of effects on both proBDNF and TrkB-T suggests that potential for mature BDNF to be involved in masculinization of Area X. As suggested above, it is possible that males are also normally affected by E2, but exogenous treatment produced no further response above that of the endogenous effect. Future Directions The present data provide some novel information regarding expression patterns of specific isoforms of BDNF and TrkB. Considered in the context of prior work on these molecules and the genes that encode them (see above), it is clear that BDNF and perhaps ligand-independent actions of TrkB-T could play key roles in shaping development of song system structure and/or function. The challenge now is to more fully describe developmental changes in pro-and mature forms of BDNF and the fulllength and truncated forms of TrkB to see how they parallel known changes in morphology and song learning. Then, manipulations of availability and activity can begin to elucidate the specific functional roles of each peptide and how they may interact with E2 to induce sexual differentiation.
Comparison principles and applications to mathematical modelling of vegetal meta-communities

This article is part of the PEGASE project, whose goal is a better understanding of the mechanisms explaining the behaviour of species living in a network of forest patches linked by ecological corridors (hedges, for instance). In particular, we plan to study the effect of habitat fragmentation on biodiversity. A simple neutral model for the evolution of abundances in a vegetal metacommunity is introduced. Migration between the communities is explicitly modelled in a deterministic way, while the reproduction process is dealt with using Wright-Fisher models, independently within each community. The large population limit of the model is considered. The hydrodynamic limit of this split-step method is proved to be the solution of a partial differential equation with a deterministic part coming from the migration process and a diffusion part due to the Wright-Fisher process. Finally, the diversity of the metacommunity is addressed through one of its indicators, the mean extinction time of a species. In the limit, using classical comparison principles, the exchange process between the communities is proved to slow down extinction. This suggests that the existence of corridors is good for biodiversity.

Introduction

This article is part of a research program aimed at understanding the dynamics of a fragmented landscape composed of forest patches connected by hedges, which are ecological corridors. When dealing with the dynamics of a metacommunity at a landscape scale, we have to take into account the local competition between species and the possible migration of species. We are interested here in the mathematical modelling of two species on two forest patches linked by an ecological corridor. We model the evolution by a splitting method: we first perform the exchange process (see the definition of the corresponding Markov chain in the sequel) over a small time step, then we perform independently on each patch a birth/death process according to the Wright-Fisher model, and we reiterate.

Our first mathematical result is to compute the limit equation of this model when the time step goes to 0 and the size of the population diverges to ∞. This issue, the hydrodynamic limit, i.e. the passage from the mesoscopic scale to the macroscopic one, has received increasing interest in the last decades (see for instance, in various contexts, [2], [11], [17]). As our main results on extinction times do not require the convergence in law of the processes, instead of using a martingale problem ([8]), we prove directly the convergence of operators towards a diffusion semi-group ([10]). We find a deterministic diffusion-convection equation, where the drift comes from the exchange process, while the diffusion comes from the limit of the Wright-Fisher process. We point out that the fact that the diffusion operator L_d satisfies a non-standard comparison principle (or maximum principle) is instrumental: the comparison principle first ensures the uniqueness of the limit of the approximation process and then allows the definition of the Feller diffusion process. This comparison principle then yields our second result, which is concerned with the comparison of the extinction time of one species for a system with exchange and a system without exchange. Assuming that the discrete extinction time converges, we prove that the limit is a solution of the equation −L_d τ = 1.
Taking advantage once again of comparison principles, we prove that the exchange process slows down the extinction of one species. Thus, the fragmentation of the habitat seems to be good for biodiversity.

This article is organized as follows. In the second section we describe the modelling at the mesoscopic scale: we couple a Wright-Fisher model for the evolution of the abundances with an exchange process. The third section is devoted to the large population limit of the discrete process. In the fourth section we discuss the issues related to the extinction time; we compare the extinction time of one species with and without the exchange process. In the final section we draw some conclusions and prospects for ecological issues, and we address the question of convergence in law for our model.

2. The mathematical model

2.1. Modelling the exchange between patches. Consider two patches that have respectively the capacities to host N_1 and N_2 individuals, each individual belonging to one of two species α and β. Write (y_1^n, y_2^n) for the numbers of individuals of type α, respectively in patches 1 and 2, at time nδt, i.e. after n iterations, where the time step δt will be defined below. The exchange process is then simply modelled by

y_1^{n+1} = y_1^n + κδt (y_2^n − d y_1^n),  y_2^{n+1} = y_2^n + κδt (d y_1^n − y_2^n),

where κ is the instantaneous speed of exchanges and d = N_2/N_1 represents the distortion between the patches (the ratio between the hosting capacities); we may assume without loss of generality that d ≤ 1. With this modelling, and assuming that κδt ≤ 1, it is easy to check that the set of admissible abundances [0, N_1] × [0, N_2] is mapped into itself, i.e. is stable, by the exchange process. Moreover:
• The total population of individuals of type α, y_1^n + y_2^n, is conserved.
• If we start with only individuals of species α (respectively β) then we remain with only individuals of α (respectively β); this reads (N_1, N_2) → (N_1, N_2) (respectively (0, 0) → (0, 0)).

Set x = (x_1, x_2) = (y_1/N_1, y_2/N_2), belonging to D = [0, 1]^2, for the population densities of species α on the two patches, and x^n = (x_1^n, x_2^n) for these densities at time nδt. Then we have alternatively

x_1^{n+1} = x_1^n + κδt d (x_2^n − x_1^n),  x_2^{n+1} = x_2^n + κδt (x_1^n − x_2^n).

This reads also x^{n+1} = A x^n, where A is a stochastic matrix. Consider now the piecewise constant càdlàg process with jumps X → AX at each time step δt. In other words, for any continuous function f defined on D = [0, 1]^2, P^ex_δt(f)(x) = f(Ax), where P^ex_δt is the transition kernel of the exchange process.

2.2. Wright-Fisher reproduction model. On each patch we now describe the death/birth process, which is given by the Wright-Fisher model. The main assumption is that the death/birth process on one patch is independent of the other one. Consider the first patch, which may host N_1 individuals. The Markov chain is then defined by the transition matrix, written for 0 ≤ j, k ≤ N_1,

P(y^{n+1} = k | y^n = j) = \binom{N_1}{k} (j/N_1)^k (1 − j/N_1)^{N_1−k}.

Since the two Wright-Fisher processes are independent, the corresponding transition kernel reads, for any function f defined on D = [0, 1]^2,

P^wf_δt(f)(x) = Σ_{k=0}^{N_1} Σ_{l=0}^{N_2} \binom{N_1}{k} \binom{N_2}{l} x_1^k (1 − x_1)^{N_1−k} x_2^l (1 − x_2)^{N_2−l} f(k/N_1, l/N_2).

Notice that P^wf_δt is a two-variable version of the usual Bernstein polynomials. In the sequel, for the sake of conciseness, we will also use the notation B_N(f) for this operator.

2.3. The full discrete model. Starting from the state x = (x_1, x_2), during a time step we apply first the exchange process and then the Wright-Fisher reproduction process. In this way, the sequence of random variables x^n is a Markov chain with state space {0, 1/N_1, ..., 1} × {0, 1/N_2, ..., 1}, and the transition kernel reads P_δt(f)(x) = P^wf_δt(f)(Ax).

3. From the discrete model to the continuous one

We consider the same scaling as for the usual Wright-Fisher model, that is N_1 δt = 1.
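Before passing to the limit, here is a minimal simulation sketch of the split-step chain of Section 2 under the scaling δt = 1/N_1 just introduced; it is written in Python, and the numerical values of N_1, N_2, κ, the initial densities and the time horizon are illustrative choices only, not taken from the article.

```python
# Split-step chain: deterministic exchange step followed by independent
# Wright-Fisher resampling on each patch, under the scaling delta_t = 1/N_1.
import numpy as np

rng = np.random.default_rng(0)

def absorption_time(N1=100, N2=60, kappa=1.0, x0=(0.5, 0.5), t_max=50.0):
    """First time the metacommunity becomes monomorphic, capped at t_max."""
    d, dt = N2 / N1, 1.0 / N1            # distortion d = N2/N1 and time step
    x1, x2 = x0
    t = 0.0
    while t < t_max:
        # exchange step x -> A x (simultaneous update with the old values)
        x1, x2 = x1 + kappa * dt * d * (x2 - x1), x2 + kappa * dt * (x1 - x2)
        # independent Wright-Fisher resampling on each patch
        x1 = rng.binomial(N1, x1) / N1
        x2 = rng.binomial(N2, x2) / N2
        t += dt
        if (x1, x2) in ((0.0, 0.0), (1.0, 1.0)):   # one of the two species is extinct
            return t
    return t_max                                    # censored: no absorption before t_max

# crude Monte Carlo estimate for two illustrative exchange speeds
for kappa in (0.5, 2.0):
    times = [absorption_time(kappa=kappa) for _ in range(100)]
    print(f"kappa = {kappa}: mean absorption time (capped) ~ {np.mean(times):.2f}")
```

Such simulations are only meant as a way to explore the mesoscopic model numerically; the rigorous comparison of extinction times with and without exchange is the object of the analysis below and of Section 4.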
We set N = N 1 in the sequel to simplify the notations. We may consider either the càdlàg process associated to the reproduction-exchange discrete process defined by x t = x n if nδt ≤ t < (n + 1)δt or the continuous piecewise linear function x t such that x t = x n for t = nδt. We consider an analogous interpolation in space in order to deal with function that are defined on [0, T ] × D where T > 0 is given. For a given continuous function f that vanishes at (0, 0) and (1, 1), we now define the sequence of functions (5) u We may also use analogously u N (t, The fonctions u N and u N represent the average densities of the species at a macroscopic level. If X N is the Lagrangian representation of the densities, then u N represents the densities in Eulerian variables. and with initial data u(0, x) = f (x). Remark 3.2. We may have proved that the càdlàg process associated to the reproductionexchange process u N converges to a diffusion equation. We will discuss this in the sequel. Besides, we prove the convergence results for a sufficiently smooth f , and we will extend in the sequel the definition of a mild solution to the equation for functions f in the Banach space E = {f ∈ C(D); f (0, 0) = f (1, 1) = 0}. The theory for Markov diffusion process and the related PDE equations is well developed in the litterature (see [1], [9], [16] and the references therein). The particularity of our diffusion equation is that the boundary of the domain is only two points. 3.2. Proof of Theorem 3.1. The proof of the theorem is divided into several lemmata. The first lemma describes in a way how the discrete process is close to a martingale. Lemma 3.3. The conditional expectation of the discrete reproduction-exchange process is Proof Using the properties of the Bernstein polynomials, Then the proof of the lemma is completed, observing that A − Id = o(1). The following lemma is useful to prove that x t and x t are close. Lemma 3.4. There exists a constant C such that , then the following conditional expectation reads We expand the ℓ 2 norm in R 2 as We first have by linearity and by the Lemma 3.3 above that that completes the proof of the lemma. The next statement is a consequence of the inequality |x t − x t | ≤ |x n − x n+1 | for t ∈ (nδt, (n + 1)δt) and of the previous lemma The processes x t and x t are asymptotically close, i.e. there exists a constant C such that As a consequence, when looking for the limit when N diverges towards +∞ of the process, we may either work with x t or x t . The next lemma is a compactness result on the bounded sequence u N defined in (5). Lemma 3.6. There exists a constant C that depends on ||f || lip and on T such that for any, x, y in D and s, t in [0, T ], Remark 3.7. Since the constants C do not depend on N we can infer letting N → ∞ some extra regularity results for u, assuming that f is Lipschitz. Proof We begin with the first estimate. Introduce n such that nδt ≤ t < (n + 1)δt. Set y t for the process that starts from y = y 0 . therefore, proving the first inequality for u N (which amounts to controlling |x n − y n |)) will imply the inequality for u N . Due to the properties of Bernstein's polynomials we have that where ω(f, 1 N ) is the modulus of continuity of f . Then, using that ||A − Id|| ≤ CN −1 , we infer that Iterating in time we have that, The other derivative is similar and then we infer from this computation that the first inequality in the statement of Lemma 3.6 is proved. We now proceed to the proof of the second one. 
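The one-step facts behind Lemmas 3.3-3.5 (the conditional mean after one iteration is A x^n, because binomial resampling is unbiased, and the one-step variance is of order 1/N) can be checked empirically; the exchange matrix is the same illustrative choice as above, i.e. an assumption rather than the paper's exact coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

N1, N2, kappa = 200, 100, 1.0
dt, d = 1.0 / N1, N2 / N1
A = np.array([[1 - kappa*dt*d, kappa*dt*d],       # illustrative exchange matrix
              [kappa*dt,       1 - kappa*dt]])

x = np.array([0.3, 0.8])
z = A @ x                                         # exchange step is deterministic
m = 200_000
samples = np.column_stack([rng.binomial(N1, z[0], m) / N1,
                           rng.binomial(N2, z[1], m) / N2])

print("empirical E[x^{n+1} | x^n]:", samples.mean(axis=0))
print("A x^n                     :", z)
print("one-step E|x^{n+1}-x^n|^2 :", np.mean(np.sum((samples - x)**2, axis=1)),
      " (order 1/N with N =", N1, ")")
```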
Introduce the integers m, n such that mδt ≤ s < (m + 1)δt and nδt ≤ t < (n + 1)δt. Using that On the one hand, by Lemma 3.4 we have that the first term in the right hand side of (11) is bounded by above by C(m−n) N . On the other hand, using that the E(y j |x j ) = (A−Id)x j then Since ||Id − A|| ≤ CN −1 then the right hand side of (12) is also bounded by above by . This completes the proof of the lemma. Thanks to Ascoli's theorem, up to a subsequence extraction, u N converges uniformly to a continuous function u(t, x). We now prove that u is solution of a diffusion equation whose infinitesimal generator is defined as the limit of N (P wf δt P ex δt − Id). Lemma 3.8. Consider f a function of class C 2 on D that vanish at (0, 0) and (1, 1). Then where L d is defined in Theorem 3.1. Proof Due to Taylor formula Using that the linear operator P wf δt is positive and bounded by 1 we then have The well-known properties of Bernstein polynomials (see [6]) entail that uniformly in x On the other hand, the operator P wf δt is the tensor product of two one-dimensional Bernstein operators. Then by Voronovskaya-type theorem (see [6]), for f ( 2d f x 2 x 2 . By density of the linear combinations of tensor products f 1 (x 1 )f 2 (x 2 ) this result extend to general f as (15) lim Denoting ∆ d the diffusion operator defined by the right hand side of (15), the Kolmogorov limit equation of our coupled Markov process is with initial data u(0, x) = f (x). Let us observe that u, the limit of E(f (x t )|x 0 = x), vanishes at two points (0, 0) and (1, 1) in the boundary ∂D. We now complete the proof of the Theorem. Considering f such that the convergence in Lemma 3.8 holds. Then, for n ≤ tN < n + 1, N (s, .)))(x)ds. Using the uniform convergence of u N , Lemma 3.8 and a recurrence on n we may prove that at the limit where we have omitted the variable x for the sake of convenience. We now state a result that ensures the uniqueness of a solution to the diffusion equation (18). Such a solution is a solution to the diffusion equation in a weak PDE sense. Introduce Remark 3.9. We precise here the regularity of the functions f in D(L d ). Since L d is a strictly elliptic operator on any compact subset of the interior of D then f is C 2 (D) ∩ C(D) (see [12]). The regularity of f up to the boundary is a more delicate issue (see [14], [15]). Besides, to determine exactly what is the domain of L d is a difficult issue. For PDEs the unbounded operator is also determinated by its boundary conditions. Here we have boundary conditions of Ventsel'-Vishik type, that are integro-differential equations on each side of the square linking the trace of the function f and its normal derivative. This is beyond the scope of this article. • Parabolic version: Consider a func- We postpone the proof of this theorem until the end of this section. We point out that a comparison principle for L d is not standard since it requires only information on two points {(0, 0), (1, 1)} in ∂D and not on the whole boundary. Theorem 3.10 implies uniqueness of the limit solution. Therefore the whole sequence u N converge and the semigroup is well defined. Actually, setting S(t)f = u(t) we then have defined for smooth f the solution to a Feller semigroup (see [1]) as follows (1) S(0) = Id. The second property comes from uniqueness, the last one passing to the limit in The third one is then simple. The third property allows us to extend the definition of S(t) to functions in E by a classical density argument. 
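The Voronovskaya-type limit invoked in Lemma 3.8, namely N(B_N f − f)(x) → x(1−x)/2 · f''(x) for the one-dimensional Bernstein operator, which produces the diffusion part of L_d, can be verified numerically on a smooth test function; scipy is used only to evaluate the binomial weights.

```python
import numpy as np
from scipy.stats import binom

def bernstein(f, N, x):
    """One-dimensional Bernstein operator B_N(f)(x) = E[f(Bin(N, x)/N)]."""
    k = np.arange(N + 1)
    return np.sum(binom.pmf(k, N, x) * f(k / N))

f   = lambda t: np.sin(np.pi * t)            # smooth test function vanishing at 0 and 1
fxx = lambda t: -np.pi**2 * np.sin(np.pi * t)

x = 0.37
for N in (50, 200, 800, 3200):
    lhs = N * (bernstein(f, N, x) - f(x))
    rhs = 0.5 * x * (1 - x) * fxx(x)
    print(f"N = {N:4d}   N(B_N f - f)(x) = {lhs:+.5f}   x(1-x)/2 * f''(x) = {rhs:+.5f}")
```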
Then we have a Feller semigroup in E that satisfies the assumptions of the Hille-Yosida theorem (see [5]). 3.3. Proof of the comparison principle. We begin with the comparison principle for the parabolic operator. We use that C 2 (D) is dense in D(L d ), i.e. that any function u in D(L d ) can be approximated in E by smooth functions u k up to the boundary, and such that Lu k converges uniformly on any compact subset ofD. We then prove the comparison principle for smooth functions and we conclude by density. Consider u as in the statement of the Theorem for a C 2 initial data f . Consider ε small enough. We then have v t (t 0 , Let us observe that if ε is chosen small enough Second case: x 0 belongs to ∂D but the four corners. We may assume that x 0 = (0, x 2 ) the other cases being similar. We have that We then have as in (20) We now conclude. since v is nonnegative we have Letting ε goes to 0 completes the proof. Let us prove the elliptic counterpart of the result for a smooth function u (we also proceed by density). Set as above v(x) = u(x) + εψ(x) + ε 2 θ(x). Introduce x 0 where v achieves its minimum, i.e. v(x 0 ) = min D v(x).. First if x 0 belongs to the interior of D, then L d v(x 0 ) > 0 and we have a contradiction. We disprove the case where x 0 belongs to the boundary but {(0, 0), (1, 1)} exactly as in the evolution equation case. Assume first that x 0 belongs to ∂D but the four corners; for instance x 0 = (0, x 2 ). Then L d (ψ + εθ)(x 0 ) =< 0 gives the contradiction. Assume then that x 0 = (0, 1). 1) are absorbing states. These two absorbing states correspond to the extinction of a species. Let us introduce the hitting time Θ N that is the random time when the Markov chain reaches the absorbing states, i.e. the extinction time. Since the restriction of the chain to the non absorbing states is irreducible and since there is at least one positive transition probability from the non absorbing states to the absorbing states then this hitting time is almost surely finite. This result is standard for Markov chains with finite state space (see [4], [7] and the references therein). Let U be the complement of the trapping states (0, 0) and (1, 1). Consider the vector T N defined as the conditional expectation (T N ) j N ∈U = E j N (Θ N ) of this hitting time and denote byP N orP the restriction of the transition matrix to U . Then for x ∈ U , denoting P x the conditional probability, we have using Markov property and time translation invariance This is equivalent to We are now interested in the limit of T N when N diverges towards ∞. Let us recall that for the one dimensional Wright-Fisher process the expectation of the hitting time starting from x converges towards the entropy H(x) (see [16]) defined by The entropy is a solution to the equation − x(1−x) 2 H xx = 1 that vanishes at the boundary. The proof, that can be found in Section 10 of [9], uses probability tools like the convergence in distribution of the processes and the associated stochastic differential equation. We believe that the same kind of tools would give the convergence of τ N in dimension two but this is beyond the scope of this article. Besides, for the sake of completeness we provide a proof for the convergence in distribution of our processes in Section 5.2 below. Set now τ N for the polynomial of degree N in x 1 and x 2 that interpolates T N at the points of the grid. We have Theorem 4.1 (Extinction time). 
When N diverges to +∞ the sequence τ N converges towards τ that is solution to the elliptic equation −L d τ = 1. Assuming the convergence of τ N , the proof of the theorem is straightforward by passing to the limit in (23) using Lemma 3.8. Remark 4.2. We expect the function τ to be smooth up to the boundary but at the two points (0, 0) and (1, 1). We admit here this result. This allows us to use the previous comparison result. 4.2. Exchanges slow down extinction. Consider now a single patch whose hosting capacity is N 1 + N 2 = (d + 1)N for N = 1 δt . The limit equation for the classical Wright-Fisher related process is Then the corresponding extinction time for the Wright-Fisher process without ex- is the corresponding averaged starting density (see [16]) and where H is the entropy defined above (24). We shall prove in the sequel Proof We point out that to check that −L d satisfies the comparison result is not obvious (see Theorem 3.10). We first observe that the entropy (24) vanishes at the boundary points {(0, 0), (1, 1)}. Setting τ (x 1 , x 2 ) = g(z), we have We then have Observing that by a mere computation we have that τ is a subsolution to the equation. 4.3. More comparison results. We address here the issue of the convergence of the limit extinction time τ = τ d,κ defined in Section 4 when κ or d converges towards 0. This extinction time depends on the starting point x. Proof Consider here the function . This function vanishes at x = (0, 0) and x = (1, 1) and satisfies Then V is a subsolution to the equation −L d τ = 1 and by the comparison principle V ≤ τ d,κ everywhere. Letting κ → 0 completes the proof of the Proposition. Proposition 4.5. Assume κ be fixed. Then that is the extinction time for one patch. Proof We begin with . Let us observe that due to (28) The strategy is to seek a supersolution X to the equation −L dX = 1 d that is bounded when d converges to 0. We first have, using the entropy function H 2 (x 1 , Therefore, since we have we obtain, for d small enough to have (1 + 2κ)d < 1, ). Using the estimate we have that if d is small enough depending on κ then −L d (H 2 + D) ≥ 1 2d . Using the comparison principle we then have that and we conclude by letting d converge to 0 since τ converges towards H(x 1 ). Miscellaneous results and comments 5.1. Discussion and prospects for ecological issues. To begin with, we have introduced a split-step model that balances between the local reproduction of species and the exchange process between patches. This split-step model at a mesoscopic scale converges towards a diffusion model whose drifts terms come from the exchanges. This has been also observed for instance in [19]. Here we deal with a neutral metacommunity model with no exchange with an external pool. Hence the dynamics converge to a fixation on a single species for large times. The average time to extinction of species is therefore an indicator of biodiversity. Here for our simple neutral model, Theorem 4.3 provides a strong reckon that the exchange process is good for the biodiversity. In some sense, the presence of two patches allows each species to establish itself during a larger time lapse. In a forthcoming work we plan to numerically study a similar model but with more than two patches and several species. We plan also to calibrate this model with data measured in the south part of Hauts-de-France. The main interest is to assess the role of ecological corridors to maintain biodiversity in an area. 
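For small patch sizes the expected extinction times can be computed exactly from the linear system (I − P̂)T = δt·1 of Section 4, which makes the comparison of Section 4.2 concrete. The exchange matrix is again the illustrative choice used above, and the entropy-based value for the merged patch uses the classical diffusion approximation with the scaling N_1 δt = 1; both are assumptions for illustration rather than the paper's exact expressions.

```python
import numpy as np
from scipy.stats import binom

def mean_absorption_times(P, dt, absorbing):
    """Solve (I - P_hat) T = dt * 1 on the non-absorbing states."""
    keep = [n for n in range(P.shape[0]) if n not in absorbing]
    T = np.linalg.solve(np.eye(len(keep)) - P[np.ix_(keep, keep)],
                        dt * np.ones(len(keep)))
    full = np.zeros(P.shape[0])
    full[keep] = T
    return full

# Two small patches coupled by the illustrative exchange matrix (assumption).
N1, N2, kappa = 12, 8, 1.0
dt, d = 1.0 / N1, N2 / N1
A = np.array([[1 - kappa*dt*d, kappa*dt*d],
              [kappa*dt,       1 - kappa*dt]])
states = [(i, j) for i in range(N1 + 1) for j in range(N2 + 1)]
P2 = np.zeros((len(states), len(states)))
for n, (i, j) in enumerate(states):
    z = A @ np.array([i / N1, j / N2])                       # exchange step
    P2[n] = np.outer(binom.pmf(np.arange(N1 + 1), N1, z[0]), # Wright-Fisher, patch 1
                     binom.pmf(np.arange(N2 + 1), N2, z[1])  # Wright-Fisher, patch 2
                     ).ravel()
T2 = mean_absorption_times(P2, dt, [0, len(states) - 1])     # absorb at (0,0), (N1,N2)

# One merged patch of capacity N1 + N2, started from the averaged density.
M = N1 + N2
P1 = np.array([binom.pmf(np.arange(M + 1), M, k / M) for k in range(M + 1)])
T1 = mean_absorption_times(P1, dt, [0, M])

i, j = 6, 4
x_bar = (i + j) / M
H = -2 * (x_bar*np.log(x_bar) + (1 - x_bar)*np.log(1 - x_bar))   # entropy (24)
print("two patches with exchange :", T2[states.index((i, j))])
print("single merged patch       :", T1[i + j])
print("entropy approximation     :", (1 + d) * H)   # diffusion limit, assumed scaling
```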
The question of the benefit of maintaining hedges arises at a time when the agricultural sector is pushing for their removal in order to enlarge cultivable plots. This is one of the issues addressed by the Green and Blue Framework in Hauts-de-France.
5.2. Convergence in distribution. We address here the convergence in law/in distribution of the infinite dimensional processes related to x^N_t. This is related to the convergence of the process towards the solution of a stochastic differential equation; we will not develop this here. Following [18] or [13], it is sufficient to check the tightness of the process and the convergence of the finite m-dimensional laws. Dealing with the continuous, piecewise linear interpolation rather than the càdlàg version, the second point is easy. Indeed, Theorem 3.1 implies the convergence of the m-dimensional law for m = 1, and we can extend the result to arbitrary m by induction using the Markov property. For the tightness, we use the so-called Kolmogorov criterion, which is valid for processes that are continuous in time (see [18], Chapter 2, and [13], Chapter 14); in our case this criterion amounts to a fourth-moment bound. It is a consequence of the following discrete estimate, since the interpolated process is piecewise linear with respect to t.
Proposition 5.1. There exists a constant C such that, for any m < n,
(37) E(|x^n − x^m|^4) ≤ C |n − m|^2 / N^2.
Proof. First step: using that x^n is close to a true martingale. Let us set A = Id − (κ/N) M = Id − B. Introduce z^0 = x^0 and z^n = x^n + B Σ_{k<n} x^k. Then, since E(x^{n+1} | x^n) = x^n − B x^n, the sequence z^n is a martingale. Moreover, for 0 ≤ m < n, we have the estimate (38) comparing the increments of x^n and z^n.
Second step: computing the fourth moment. To begin with we observe that, due to (38), it suffices to prove that (37) is valid with z^n replacing x^n. We introduce the increments y^j = z^{j+1} − z^j and expand, writing |·| and (·,·) respectively for the Euclidean norm and the scalar product in R^2. Since y^l is independent of the past, if for instance l > max(i, j, k) then E((y^i, y^j)(y^k, y^l)) = 0. Therefore (40) reduces to four families of terms D_1, ..., D_4.
Third step: handling D_4 and D_3. The key estimate reads as follows.
Fourth step: handling D_1 and D_2. Using the conditional expectation, and due to (42) and the Cauchy-Schwarz inequality, we have E(|y^k|^2 | x^k) = O(N^{−1}), and the bound for D_1 follows. We then handle D_2 exactly as we did for D_1. This completes the proof.
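A quick Monte-Carlo experiment is consistent with the fourth-moment bound (37); the exchange matrix below is the same illustrative choice as in the earlier sketches, i.e. an assumption rather than the paper's exact coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

N1, N2, kappa = 200, 100, 1.0
dt, d = 1.0 / N1, N2 / N1
A = np.array([[1 - kappa*dt*d, kappa*dt*d],      # illustrative exchange matrix
              [kappa*dt,       1 - kappa*dt]])

reps, gap = 20_000, 40                           # estimate E|x^n - x^m|^4 for n - m = gap
x0 = np.array([0.3, 0.8])
x = np.tile(x0, (reps, 1))
for _ in range(gap):
    z = x @ A.T                                  # exchange step, applied row-wise
    x = np.column_stack([rng.binomial(N1, z[:, 0]) / N1,
                         rng.binomial(N2, z[:, 1]) / N2])

fourth = np.mean(np.sum((x - x0)**2, axis=1)**2)
print("E|x^n - x^m|^4  =", fourth)
print("(n-m)^2 / N^2   =", gap**2 / N1**2)
```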
Arctic Ocean sea ice cover during the penultimate glacial and the last interglacial Coinciding with global warming, Arctic sea ice has rapidly decreased during the last four decades and climate scenarios suggest that sea ice may completely disappear during summer within the next about 50–100 years. Here we produce Arctic sea ice biomarker proxy records for the penultimate glacial (Marine Isotope Stage 6) and the subsequent last interglacial (Marine Isotope Stage 5e). The latter is a time interval when the high latitudes were significantly warmer than today. We document that even under such warmer climate conditions, sea ice existed in the central Arctic Ocean during summer, whereas sea ice was significantly reduced along the Barents Sea continental margin influenced by Atlantic Water inflow. Our proxy reconstruction of the last interglacial sea ice cover is supported by climate simulations, although some proxy data/model inconsistencies still exist. During late Marine Isotope Stage 6, polynya-type conditions occurred off the major ice sheets along the northern Barents and East Siberian continental margins, contradicting a giant Marine Isotope Stage 6 ice shelf that covered the entire Arctic Ocean. Coinciding with global warming, Arctic sea ice has rapidly decreased during the last four decades. Here, using biomarker records, the authors show that permanent sea ice was still present in the central Arctic Ocean during the last interglacial, when high latitudes were warmer than present. S ea ice with its strong seasonal and interannual variability (Fig. 1) is a very critical component of the Arctic system that responds sensitively to changes in atmospheric circulation, incoming radiation, atmospheric and oceanic heat fluxes, as well as the hydrological cycle 1,2 . Ice significantly reduces the heat flux between ocean and atmosphere; through its high albedo it has a strong influence on the radiation budget of the entire Arctic. Furthermore, the sea-ice cover strongly affects biological productivity, as a more closed sea-ice cover reduces primary production due to low light influx in the surface waters. As sea ice is sensitive to atmospheric and oceanic variability, and because it is involved in several key climate feedbacks (ice-albedo feedback, cloud-radiation feedback, etc.), sea ice plays a substantial role in the global climate system variability, known as polar amplification 3 . Over the past three to four decades, coincident with global warming and atmospheric CO 2 increase, Arctic sea ice has significantly decreased in its extent (cf., Fig. 1) as well as in thickness 2, 4-6 . The loss of sea ice results in a distinct decrease in albedo, causing further warming of ocean surface waters. When extrapolating this trend, the central Arctic Ocean might become ice-free during summers within about the next five decades or even sooner 2,7 . Based on a proxy reconstruction, ice-free summers also occurred during a late Miocene warm climate with simulated atmospheric CO 2 concentrations of 450 ppm 8 , a value we also might reach in the near future. Furthermore, the recent decrease in sea ice seems to be more rapid than predicted by climate models 4,5 , indicating that the processes causing these recent rapid climate changes are not fully understood and subject of intense scientific and societal debate. In this context, a key aspect is to distinguish and quantify more precisely natural and anthropogenic greenhouse gas forcing of global climate change and related sea ice decrease 2 . 
The last time that Arctic temperatures were significantly higher than today was the Early Holocene Thermal Maximum 9,10 . The Holocene, however, is an interglacial cycle not concluded yet. This certainly justifies climatic evaluations of older, concluded warm interglacial cycles such as the last interglacial (LIG), i.e., Marine Isotope Stage (MIS) 5e (Eemian), lasting from about 130 to 115 ka and often proposed as a possible analog for our nearfuture climatic conditions on Earth 11,12 . Based on proxy records from ice, terrestrial and marine archives, the LIG is characterized by an atmospheric CO 2 concentration of about 290 ppm, i.e., similar to the pre-industrial (PI) value 13 , mean air temperatures in Northeast Siberia that were about 9°C higher than today 14 , air temperatures above the Greenland NEEM ice core site of about 8 ± 4°C above the mean of the past millennium 15 , North Atlantic sea-surface temperatures of about 2°C higher than the modern (PI) temperatures 12,16 , and a global sea level 5-9 m above the present sea level 17 . In the Nordic Seas, on the other hand, the Eemian might have been cooler than the Holocene due to a reduction in the northward flow of Atlantic surface water towards Fram Strait and the Arctic Ocean, indicating the complexity of the interglacial climate system and its evolution in the northern high latitudes 12,18,19 . If climate models are able to reproduce past warm climatic conditions (such as those of the LIG), including the extent of Arctic sea ice cover, we will have additional confidence in their representation of Arctic processes and their projections for the future [20][21][22][23] . In order to test and approve climate models for simulation and prediction of Arctic climate and sea ice cover 8,[20][21][22][23][24][25][26][27][28] , however, precise (semi-quantitative) proxy records about past sea ice concentrations are needed. Such records may be obtained using a quite recently developed biomarker approach that is based on the determination of a highly branched isoprenoid (HBI) with 25 carbons (C 25 HBI monoene) 29 . This biomarker is only biosynthesized by specific diatoms living in the Arctic sea ice and thus named 'IP 25 ' (= ice proxy with 25 carbons) 30 . That means, the presence of IP 25 in the sediments is a direct proof for the presence of past Arctic sea ice. Meanwhile, this biomarker approach has been used successfully in many studies dealing with the reconstruction of Arctic sea ice history during the last glacial to Holocene time interval, i.e., the last about 30 ka [31][32][33][34][35][36][37] . Furthermore, this biomarker seems to be quite stable as it was found in sediments as old as the late Miocene 8 . By combination of this sea ice proxy IP 25 with (biomarker) proxies for open-water phytoplankton productivity such as brassicasterol, dinosterol or a specific tri-unsaturated HBI (HBI-III) [37][38][39][40][41] , a more precise (semi-quantitative) reconstruction of present and past Arctic Ocean sea ice conditions from marine sediments are now available (Supplementary Fig. 1; see Metho 6ds for some more details). For older glacial and interglacial intervals such as MIS 6 and MIS 5, however, no such biomarker data of the central Arctic Ocean sea ice cover are available so far. 
For these time intervals, reconstructions of past sea ice conditions are mainly restricted to continental margin sites and, even more important, only based on indirect proxies such as, for example, foraminifera, dinoflagellates, and ostracodes [42][43][44][45][46][47][48] . Here we produce these biomarker proxy records of sea ice distribution in the central Arctic Ocean for the time interval of late MIS 6-MIS 5. Including open-water phytoplankton biomarkers as well as micropaleontological data, we demonstrate (1) that a permanent sea ice cover existed during MIS 6 and (2) that during the LIG sea ice was still present in the central Arctic Ocean during the spring/summer season even under (global) boundary conditions significantly warmer than the present. Seasonal open-water conditions, on the other hand, occurred along the Barents Sea continental margin during the interstadials of MIS 5 (with minimum values during MIS 5e/Eemian) but also during the preceding glacial MIS 6. The latter finding-although still based on a low-resolution record-appears to contradict the hypothesis of a thick ice shelf covering the entire Arctic Ocean during MIS 6 as proposed by Jakobsson et al. 49 . Our proxy records are compared with climate model simulations using a coupled atmosphere-ocean general circulation model. Results Glacial to LIG Arctic Ocean sea ice cover. In order to reconstruct the sea ice history of the Arctic Ocean during glacial (MIS 6) to LIG (MIS 5) conditions, we determined the sea ice biomarker proxy IP 25 , open-water phytoplankton biomarkers, and terrestrial biomarkers from four selected sediment cores (see Supplementary Table 1 for exact core locations and water depths, Supplementary Tables 2-5 for the biomarker data). These cores were recovered from areas characterized by different sea ice conditions today, ranging from perennial sea ice in the central Arctic Ocean to seasonal sea ice conditions along the Barents Sea continental margin ( Fig. 1; see Methods for more details). Both IP 25 as well as brassicasterol concentrations are zero or close to zero throughout the time interval from MIS 6 to MIS 5 in the two high-Arctic cores PS2200-5 and PS51/038-3 ( Fig. 2a, b). In all studied samples from both cores also no HBI-III was found (Supplementary Tables 2 and 3). These data strongly point to predominantly perennial sea ice cover during the glacial and LIG (Fig. 3b), preventing algal production during the spring and summer. Based on the biomarker data, the MIS 6/MIS 5 sea ice conditions in the central Arctic Ocean were probably similar to those reconstructed for the Last Glacial Maximum (LGM) and MIS 1/Holocene (Fig. 3a, Supplementary Fig. 6) 36,40 . The biomarker records of the MIS 6/MIS 5 interval at Core PS2757-8 display a surprising distribution pattern (Fig. 2c). Maximum concentrations of sea ice, open-water phytoplankton and terrestrial plant biomarker proxies, i.e., IP 25 , brassicasterol and HBI-III, as well as ß-sitosterol, respectively, are recorded during late MIS 6, followed by a sharp drop during Termination II at the end of MIS 6. During MIS 5 including the LIG, on the other hand, all biomarker proxies display minimum values of zero or close to zero (except for one IP 25 peak near the base of MIS 5). Based on these data and looking at the position of the data points within the IP 25 vs. brassicasterol diagram (Fig. 
3b), an extended sea ice cover with occasional ice edge/polynya conditions likely prevailed in the southern Lomonosov Ridge area close to the Siberian continental margin during the late MIS 6. Under such conditions, ice melting and related nutrient and sediment release may have resulted in high fluxes of ice algae, open-water phytoplankton and terrigenous matter ( Supplementary Fig. 1). In contrast, a more or less closed sea ice cover probably existed during MIS 5. Whereas at the three sites from the central Arctic Ocean an extended to perennial sea ice cover was probably predominant, sea ice conditions were much more variable along the northern Barents Sea continental margin as reflected in the data of Core PS2138-2 ( Fig. 2d). High concentrations of both the sea ice proxy IP 25 and the phytoplankton biomarker brassicasterol were measured in samples representing the late MIS 6 and the interstadials MIS 5e, 5c and 5a. For the latter interstadials, also increased HBI-III concentrations were determined (see discussion below). Based on these data, an extended but variable sea ice cover with closed sea ice to ice-edge conditions occurred during late MIS 6 ( Fig. 3b, Supplementary Fig. 1). During the MIS 5 interstadials, a seasonal sea ice cover and ice-edge conditions seem to have been most prominent, with minimum sea ice concentrations towards almost ice-free summers during MIS 5e (Eemian) (Fig. 3b). During the stadials MIS 5d and 5b, on the other hand, sea ice and phytoplankton biomarkers display zero to almost zero concentrations, indicative of a more closed sea ice cover (Figs. 2d and 3b). Last interglacial Arctic Ocean sea ice cover simulations. In addition to our proxy records, we performed transient integrations as well as time slice experiments for the MIS 5 (see Methods). The model experiments were driven by orbitallyinduced insolation and greenhouse gas concentrations in the atmosphere (Supplementary Table 6). Figure 4 shows the time slices for 130, 125, and 120 ka as well as the PI conditions. Our simulation efforts reveal similar boreal spring (March) sea ice extent in the LIG time slices as in the PI simulation. While sea ice concentrations were slightly lower in June during the early LIG (130 ka) and the middle LIG (125 ka) compared to PI concentrations, the largest difference can be observed in September when sea ice concentrations during the early and middle LIG were distinctly lower then those modeled for the PI. During the late LIG (120 ka), on the other hand, September sea ice concentrations seems to have been quite similar to the PI (Fig. 4). Discussion The Quaternary glacial history of the Arctic Ocean is characterized by the repeated build-up and decay of circum-Arctic ice sheets on the continental shelves, the development and disintegration of ice shelves, and related changes in oceancirculation patterns and sea ice cover [50][51][52][53][54][55] . There is, however, still an ongoing and partly controversial debate about the timing and extent of maximum glaciations. A comprehensive circumpolar overview of glacial landforms, stratigraphies, and chronologies and their interpretation in terms of glacial history, is given by Jakobsson et al. 54 , summarizing the current state of knowledge and identifying key questions arising from this synthesis. Based on new evidence of ice-shelf groundings on bathymetric highs in the central Arctic Ocean, Jakobsson et al. 49 most recently proposed an extended thick ice shelf covering the entire central Arctic Ocean (Fig. 
5b) and dated it to MIS 6 (~140 ka). This hypothesis would be in line with the biomarker data from the central Arctic Ocean sites PS2200-5 and PS51/038-3 pointing to a more closed and thick ice cover that has prevented both phytoplankton and sea ice algae production (Figs. 2a, b, 3b). The near absence of planktic foraminifers in the MIS 6 sediments of these cores (Supplementary Figs. 2 and 3) 56 also supports the interpretation of virtually no surface water productivity due to closed sea ice conditions. From these data, however, it would not be possible to distinguish between a closed, several meters thick sea ice cover and an extended ice shelf of several hundred meters in thickness. The data from cores PS2757-8 and PS2138-2 (Figs. 2c, d, 3b), on the other hand, do not support the hypothesis of such a giant late MIS 6 ice shelf. Our biomarker proxy records indicate at least occasionally open-water conditions, i.e., an ice edge situation, that allowed phytoplankton and ice algae production as well as increased flux of terrigenous matter (Fig. 5a; cf. Supplementary Fig. 1). For the Barents Sea continental margin (i.e., the location of Core PS2138-2), the MIS 6 situation might have been similar to that of the LGM 57,58 . These authors postulated an extended Barents Sea Ice Sheet, the western part of the huge Eurasian Ice Sheet 51,55 , that had reached the shelf edge, causing polynya-like open-water conditions (triggered by strong katabatic winds) with phytoplankton and sea ice algae production, subglacial meltwater outflow and the deposition of suspended material on the slope at site PS2138-2. Furthermore, the seasonal open-water conditions along the Barents Sea continental margin might have been fostered by the inflow of Atlantic Water (Fig. 5a) that was probably significantly reduced but still penetrated continuously to at least the Franz Victoria Trough west of Franz Josef Land (see Fig. 1 for location) during the last 150 ka 58 . Further towards the east, i.e., off Severnaya Zemlya, no clear signals for open-water conditions have been found in the sedimentary record of Core PS2741-1 (Fig. 5a) 58 .
Fig. 2 Proxy records for sea ice and surface-water productivity at the four studied sites for the time interval MIS 6-MIS 5. a Core PS2200-5, b Core PS51/38-3, c Core PS2757-8, and d Core PS2138-2. Concentrations (in µg/gTOC) of the biomarkers IP 25 (blue circles), brassicasterol (green circles) and ß-sitosterol (orange squares) are plotted vs. core depth in centimetres below seafloor (cmbsf). For Core PS2757-8, concentrations (in ng/g sediment) of HBI-III are also shown (yellow triangles). Interglacial MIS 5 (and 7) and glacial MIS 6 (and 4) are highlighted by beige and light blue background colors, respectively. PI = perennial sea ice cover; EI = extended sea ice; EI-Po = extended sea ice and polynya situation. Numbers (0.2, 0.2-0.6, and 1) on top of the IP 25 records of cores PS2757-8 and PS2138-2 indicate mean PIP 25 index values. For Core PS2200-5, the relative abundance of the ostracode species Acetabulastoma arcticum, indicative of perennial sea ice cover in the central Arctic Ocean 46 , is shown (cf. Supplementary Fig. 8). For Core PS2757-8, the amount of sand (brown solid line, mainly representing terrigenous material) and a peak event of IRD input (hatched bar at 640 cmbsf) are added (data from ref. 77 ). The circled numbers 1-4 indicate different stages of sea ice and ice sheet extent presented in Fig. 6.
But how can the open-water conditions at Core PS2757-8, i.e., the southern Lomonosov Ridge close to the East Siberian continental margin not influenced by Atlantic Water during MIS 6, be explained? One probable explanation could be that an extended East Siberian Chukchi Ice Sheet (ESCIS) as proposed by Niessen et al. 53 has existed at this time ( Fig. 5a). Evidence for glacial landforms based on hydro-acoustic data from the East Siberian continental margin remain undated 53 , but were most recently supported by numerical modeling to be related to an ESCIS that has formed during MIS 6 59 . Such an extended ice sheet associated with strong katabatic winds should have caused polynya-like open-water conditions in front of the ice sheet ( Fig. 5a), resulting in increased fluxes of phytoplankton, ice algae and terrigenous matter as observed in the PS2757-8 record ( Fig. 2c and Fig. 6, Scenario 2), i.e., a situation similar to that proposed for the Barents Sea continental margin ( Fig. 5a) 57,58 . Such a scenario would clearly contradict the hypothesis of a MIS 6 ice shelf covering the entire Arctic Ocean (Fig. 5b) 49 . On the other hand, Core PS2757-8 is close to the area where we discovered SE-NW oriented streamlined landforms over distances of >100 km on top of the southern Lomonosov Ridge at water depths between 800 and 1000 m during the Polarstern Expedition PS87, interpreted as glacial lineations that were formed by coherent masses of grounded ice flowing across the ridge in a NW direction ( Supplementary Fig. 7d) 8 . Based on sub-bottom profiling and the age model of Core PS87/086-3, a MIS 6 age of the youngest ice-erosional event seems to be most realistic (Supplementary Figs. 4, 7a, 7b). This observation would support the hypothesis of an extended MIS 6 ice shelf at least for this area. One possible but still somewhat speculative explanation for these discrepancies could be a temporal succession of different scenarios as shown in Fig. 6. After a period with maximum extension of the ESCIS covering the southern Lomonosov Ridge (including the area of cores PS2757-8 and PS87/086-3) and causing ice-shelf grounding (ice rise) with no ice algae production underneath (Fig. 6, Scenario 1), the ice shelf started to retreat. Polynya-like conditions caused by strong katabatic winds allowed sea ice algae and phytoplankton production during the late(st) MIS 6 ( Fig. 6, Scenario 2). A marginal ice zone (MIZ)/polynya situation may also be supported by the presence of low but significant concentrations of HBI-III (Fig. 2c). Although the exact sources of this biomarker are not known yet, it seems to be strongly enhanced in MIZ environments as described for the Antarctic 60 as well as the Arctic (Fig. 7f) 37 . During Termination II (Fig. 6, Scenario 3), the ESCIS collapsed close to the shelf edge, and numerous icebergs calved near the grounding line of the remaining ESCIS of which some drifted over the coring location as shown in peak abundances of the terrigenous sand fraction and peak accumulation of ice-rafted debris (IRD) (Fig. 2c) following LIG (MIS 5e/Eemian), the ESCIS disappeared (and with it the katabatic winds) and an extended, more or less closed sea ice cover remained over the southern Lomonosov Ridge in the area of Core PS2757-8, preventing phytoplankton and sea ice algae productivity (Fig. 6, Scenario 4). Towards the flooded East Siberian shelf, it is likely that ice-free conditions existed. 
The interpretation of the biomarker records of Core PS2757-8 is also further supported by some limited biomarker data from nearby Core PS87/086-3, representing the scenarios 1, 2, and 4 described in Fig. 6 (Supplementary Figs. 4 and 7). As described above for the biomarker proxies of Core PS2200-5 and Core PS51/038-3 (Figs. 2 and 3), a perennial sea ice cover , suggesting similar sea ice conditions during the LIG as during the latest Holocene (present). That means, the perennial sea ice cover must have been interrupted by phases with some restricted open-water conditions during summer that allowed foraminifers to reproduce 56 . The latter is also supported by the presence of calcareous algae (coccolithophoridae) in the Eemian sediments of Core PS2200-5 ( Supplementary Fig. 2) 56 . Furthermore, this interpretation is in line with high abundances of the ostracode species Acetabulastoma arcticum found in these sediments of Core PS2200-5 (Fig. 2a) and Core 96/12-1PC from Lomonosov Ridge (see Fig. 1 for location), proposed to be a proxy for a perennial sea ice cover with >75% sea ice concentrations ( Supplementary Fig. 8) 46 . However, phytoplankton-and ice algae-related organic matter production and flux must have been very low. Thus, it did not (Core PS51/038-3) or almost not (Core PS2200-5) survive zooplankton grazing, transport through the water column and degradation at the seafloor and in the sediment. Peak abundances of the small subpolar planktic foraminifer species Turborotalita quinqueloba found in MIS 5e sediments from the southern Lomonosov Ridge close to the Greenland continental margin (Site GreenICE, Fig. 1), a region with a modern perennial sea ice cover, may indicate less sea ice than today 45 . According to these authors, however, it cannot be determined whether a reduction in sea ice cover was part of a more wide-spread regional pattern or a more restricted phenomenon forced by a polynya-type setting (similar to the modern NorthEast Water Polynya off northeast Greenland). Along the Barents Sea continental margin, the sea ice cover was quite variable throughout the entire MIS 5. This variability in sea ice cover was probably mainly triggered by the variability in Atlantic Water inflow 42,56,58 and (at least in the early-mid MIS 5e) driven by solar insolation (Fig. 7). During stadials MIS 5d and 5b, the inflow of Atlantic water was strongly weakened, indicated by the near absence of the dinoflagellate species Operculodinium centrocarpum (Fig. 7d) 42 and resulting in a strongly extended sea ice cover. Such an extended sea ice cover and reduced primary production are reflected in the near absence to absence of both IP 25 (Fig. 7b) and phytoplankton biomarkers (i.e., brassicasterol and HBI-III) (Fig. 7c) as well as maximum PIP 25 values (Fig. 7a). This interpretation is further supported by the minimum of the total number of dinoflagellate cysts and peak concentrations of the dinoflagellate species Impagidinium pallidum (Fig. 7d), indicative of cold polar conditions and an extensive seasonal sea ice cover 42 . During the interstadials and coinciding with maxima in insolation, on the other hand, sea ice was reduced and surfacewater productivity increased as indicated by minima in PIP 25 and peak values in brassicasterol and HBI-III, respectively (Fig. 7a, c). Especially the latter may may point to increased productivity in connection with a MIZ situation 37,60 . 
The most prominent sea ice minimum occurred during the LIG (MIS 5e/Eemian), as clearly reflected in the semi-quantitative PIP 25 records of Core PS2138-2. Both P B IP 25 and P III IP 25 (see Methods for further explanation) reach very similar minimum values of about 0.2 and less (Fig. 7a), i.e., values that may correspond to spring/summer sea-ice concentrations of about 20% or even less (cf. refs. 37,38). This interpretation is supported by the dinoflagellate record (Fig. 7d) as well as a prominent maximum in Atlantic-Water species of benthic foraminifers 44 .
Fig. 5 Schematic illustration of Arctic sea ice cover and circum-Arctic ice sheets during MIS 6. a Ice sheet configuration 52,54 , including the extended ice sheet on the East Siberian continental margin 53 . Such an extended East Siberian ice sheet/shelf seems to be supported by numerical reconstruction, but should have been connected with the Eurasian Ice Sheet (transparent light gray oval with red question mark 59 ). Strong katabatic winds related to the ice sheets (shown tentatively as stippled black arrows) were probably responsible for ice-free polynya-type conditions off the major ice sheets, causing the phytoplankton and sea-ice algae productivity recorded in cores PS2138-3 and PS2757-8 (for the region off the Greenland-Laurentide Ice Sheet no proof from sediment cores is available; a polynya-type situation is therefore only assumed and marked by question marks).
This interval of strongly reduced summer sea ice concentration shows some delay relative to the summer insolation maximum (Fig. 7b), but is more or less contemporaneous with the peak LIG warmth documented in the Nordic Seas [61][62][63][64] . Unfortunately, our time resolution of one sample per ~3000 years is not high enough to distinguish between early, middle and late LIG conditions, as is possible for the data sets from the Nordic Seas. There, planktic δ 18 O records from cores MD95-2010 and MD99-2304 (for core locations see Fig. 7e) document a climatic optimum in the early-middle part of the LIG between about 126 and 116 ka, related to a strong poleward extension of warm Atlantic Water 61,62,64 . These conditions are quite similar to those also described for the Early Holocene at cores MSM5/5-712-2 and NP05-11-70GC (Fig. 7a, g; see Fig. 7e for core locations), i.e., very low PIP 25 values of 0.2 and less, interpreted as almost ice-free conditions triggered by increased Atlantic Water inflow 37,38 . Towards the end of the LIG, i.e., during the early stage of the last glacial inception after 115 ka and coinciding with a minimum of the northern summer insolation, sea ice along the Barents Sea continental margin started to extend by a factor of 2-3 in comparison to the mid-LIG/Eemian minimum (Fig. 7a, b). Contemporaneously, the central Arctic Ocean sea ice concentrations probably increased as well (cf. Fig. 2a). Based on simulation experiments, a related increase in Arctic freshwater export by sea ice may have induced a weakening in ocean heat transport by the subpolar gyre in the North Atlantic 20 . Whereas most proxy-based reconstructions point to an early-middle LIG climatic optimum with reduced summer sea ice concentrations between 126 and 116 ka, the results of our model simulations only support a pronounced reduction in summer sea ice concentration for the LIG-125 and LIG-130 runs (in both the time slice and the transient runs; Figs. 8 and 9), but also indicate that sea ice was still present in the central Arctic Ocean even under climatic conditions significantly warmer than today (Fig. 4).
The presence of summer sea ice in the central Arctic Ocean-despite the elevated air temperatures 14, 15, 64 -may have resulted from a reduced total oceanic heat flux towards the north, as suggested from (compared to the PI control runs) reduced AMOC patterns during LIG-130 and LIG-125 (Fig. 4). In the central Arctic Ocean, simulated LIG-130 and LIG-125 sea ice concentrations decreased to about 65-75% at sites PS51/038-3 and PS2757-8 (Figs. 8b, c) and to about 50-60% at site PS2200-2 ( Fig. 8a). At the Barents Sea continental margin (i.e., at site PS2138-2) strongly influenced by Atlantic Water inflow, minimum summer sea ice concentrations of about 25% were simulated for the 125 ka time slice (Fig. 8d). This minimum value fits almost exactly with the PIP 25 reconstruction at Core PS2138-2 (Fig. 7a). For the LIG-120 interval, we record an apparent mismatch between the LIG-120 simulation (suggesting sea ice conditions similar to those of the PI conditions) (Figs. 4 and 8) and theproxy-based sea ice record (suggesting minimum sea ice concentrations similar to the early-mid-LIG (Fig. 7a). A similar mismatch between LIG-120 mean annual surface temperature (MAT) simulation and proxy data is also described by Otto-Bliesner et al. 21 . The simulated MAT is very similar to the PI, whereas the LIG-130 and LIG-125 MAT simulations resulted in significantly warmer temperatures. In our study, this mismatch might be explained by the results of the transient simulations of the LIG from 130 ka to 115 ka, indicating the occasional occurrence of almost ice-free years (probably caused by internal variability) near 120 ka (Fig. 9). During these exceptional years, significantly increased algal productivity may have caused the biomarker signal to be preserved in the sediments. However, this phenomenon cannot explain the mismatch to the other published proxy records clearly indicating warm climatic conditions in the High Northern Latitudes at that time (see discussion above). Thus, there appears to be a need for model Fig. 6 Cartoon of different paleoenvironmental scenarios of sea ice and ice sheet extent at the East Siberian Continental Margin/Southern Lomonosov Ridge area. These scenarios are (1) to (4) based on the biomarker records from Core PS2757-8 (Fig. 2c). (1) MIS 6 scenario with an extended East Siberian Ice Sheet and ice shelf/rise. Under such conditions, neither phytoplankton productivity nor terrigenous organic matter (OM) input occur as reflected in the absence of the biomarkers, and total sedimentation may have decreased to zero. Hatched field marks area on southern Lomonosov Ridge where traces of erosion by grounding ice were recorded as SE-NW oriented streamlined bedforms on the Lomonosov Ridge at water depths between 800 and 1000 m (ref. 8 ; cf., Supplementary Fig. 7). (2) Late MIS 6 with the ice sheet still reaching the shelf edge, with a polynya situation caused by strong katabatic winds. Quite stable ice-edge conditions resulted in increased fluxes of IP 25 , open-water phytoplankton and terrigenous biomarkers (cf., Supplementary Fig. 1). (3) Phase of major retreat and decay of the ice sheet resulting in high sediment (IRD) input by calving icebergs. (4) Last Interglacial (MIS 5e/Eemian) with a more or less closed sea ice cover situation over Core PS2757-8, preventing phytoplankton and sea ice algae productivity, and probably ice-free conditions towards the East Siberian shelf. 
The occurrence of phytoplankton, sea ice algae and terrigenous biomarkers are indicated by green stars, yellow stars and orange rhombs, respectively. Red circle indicates approximate location of Core PS2757-8 improvement, which could be solved with further data-modeling comparison. Pfeiffer and Lohmann 28 indicate in model experiments that the height of the Greenland Ice Sheet also affects the sea ice cover in the Arctic Ocean. That means, a reduced Greenland Ice Sheet would result in a significantly reduced sea ice cover as shown in our LIG-130 simulation (Fig. 8): at the Barents Sea continental margin, sea ice concentrations may have reached values as low as 10% (Site PS2138-2), in the central Arctic Ocean sea ice may have decreased to concentrations of 30-50% (Sites PS51/38-3 and PS2200-2). These very low sea ice concentration values, however, are not supported by our proxy records (cf., Figs. 3 and 7), suggesting that the Greenland Ice Sheet has probably not strongly deviated from its present hight. Based on ice-volume-equivalent sea level reconstructions, the Bering Strait was probably already re-opened during all the three LIG intervals and most parts of the shallow Siberian marginal seas were already flooded even during LIG-130 65,66 . As the exact reconstruction of global sea level rise during Termination 2 is still under debate, we also simulate a LIG-130 scenario with a closed Bering Strait and only half-flooded Siberian shelf seas (Supplementary Fig. 9). Whereas the simulations for March and June are all quite similar, the September sea ice concentration of the central Arctic Ocean is significantly lower under conditions with a closed Bering Strait and half-flooded shelf seas (Fig. 8, Supplementary Fig. 9). That means, the simulations with an already re-opened Bering Strait and mostly flooded shelf seas are much more similar to our proxy reconstruction, and thus seem to represent the more realistic scenarios. Finally, we have compared the Arctic sea ice conditions of the LIG and simulated future climate projections for 2100 and 2300, based on two different IPCC scenarios 2 , the RCP4.5 (583 ppm CO 2eq ) and the RCP6 (808 ppm CO 2eq ) (Fig. 8). Both scenarios show a severe reduction in sea ice coverage in the late summer, i.e., summer sea ice concentrations are significantly lower than those of the LIG. With increasing atmospheric CO 2 , however, the reduction of sea ice in the central Arctic Ocean is more rapid and disproportionately high in comparison to its margin. Whereas the mid-LIG summer sea ice concentrations were still around 60 to 75% in the central Arctic Ocean, but only around 20% or less along the Atlantic-Water influenced Barents Sea continental margin, nearly ice-free conditions might be reached in the entire Arctic Ocean in 2300. The number of ice-free summer months is increasing with higher atmospheric CO 2 . Under these high CO 2 concentrations, the winter sea ice may start to melt as well (Fig. 8). Furthermore, the higher obliquity during the LIG (Supplementary Table 6) may suggest an insolation forcing during the LIG, whereas for the climate scenarios RCP4.5 and RCP6 the additional heat fluxes are induced by increased greenhouse gas concentrations in the atmosphere. In conclusion, we are aware that our low-resolution proxy study of the Arctic Ocean sea ice cover during MIS 6 and MIS 5 is only a first but important step. Furthermore, our results also display some model-proxy data inconsistencies (cf., 21 ). 
Nevertheless, our findings already provide important groundtruthing data of sea ice conditions during the penultimate glacial and the LIG/Eemian of the central Arctic Ocean and the Siberian-Barents Sea continental margin, that may help to further test and improve models for simulation and prediction of future climate change. In follow-up research, such a study should be extended to a high-resolution approach to be carried out on thicker, well-dated MIS 6/MIS 5 sequences recovered from key areas of the Arctic Ocean and adjacent marginal seas. Such sedimentary sequences with the essential spatial and temporal resolution, however, are not available yet and have to be cored in the future. These new sea ice proxy records are needed (1) to fully prove the scenarios of a succession from an extended ice shelf to polynya/open-water conditions (cf., Fig. 6), (2) to reconstruct in more detail the changes in sea ice cover for early, middle and late LIG intervals characterized by very different external forcings and related internal feedback mechanisms, and (3) to allow a more fundamental proxy data/modeling comparison that results in model improvements and better reproduction of the LIG climatic evolution and prediction of future climatic scenarios [20][21][22][23]64 . Methods Studied sediment cores and stratigraphic framework. Exact locations and water depths of the four studied sediment cores as well as those discussed in the text are listed in Supplementary Table 1. Two of these cores, Core PS2200-5 and Core PS51/038-3, are located in the central Arctic Ocean characterized by a perennial sea ice cover today ( Fig. 1; 8-10/10 summer sea ice concentration). Core PS2757-8 is located on the southern Lomonosov Ridge close to the Laptev Sea continental margin, an area that is predominantly covered by sea ice ( Fig. 1; 7/10 summer sea ice concentration) but may occasionally be even ice-free during summer. The fourth core, Core PS2138-2, is located at the Barents Sea continental margin, an area with a seasonal sea ice cover and a strong influence of warm Atlantic Water inflow today ( Fig. 1; ca. 4/10 summer sea ice concentration). The stratigraphic framework and related age models of the four sediment cores used in this study, are based on oxygen isotope stratigraphy, 10 Be stratigraphy, paleomagnetostratigraphy, biostratigraphy, lithostratigraphy, and/or magnetic susceptibility records . From these data we are confident that the chronology of our records is reliable and accurate enough as a framework for our MIS 6/MIS 5 sea ice reconstruction. For the quantification of IP 25 and HBI-III their molecular ions (m/z 350 and m/z 346, respectively) in relation to the abundant fragment ion m/z 266 of the internal standard (7-HND) were used (selected ion monitoring, SIM mode). The different responses of these ions were balanced by an external calibration 33 . The Kovats Index calculated for IP 25 is 2086. The detection limit for quantification of IP 25 using the GC-MS system described is 5 ng mL −1 . Brassicasterol and β-sitosterol were quantified as trimethylsilyl ethers using the molecular ions m/z 470 and m/z 486, respectively, in relation to the molecular ion m/z 464 of cholesterol-D 6 . When using this biomarker proxy for sea ice reconstructions, however, one should have in mind that IP 25 is absent under a permanent sea ice cover limiting light penetration and, as a consequence, sea ice algal growth (i.e., IP 25 = 0). 
As IP 25 is only produced within the sea ice matrix, ice-free conditions also result in zero IP 25 concentrations. Thus, the two extremes, ice-free vs. thick closed sea ice cover, cannot be distinguished solely by using the IP 25 biomarker. By considering also a phytoplankton biomarker indicative for open-water primary production, these extremes can be easily separated as under a permanent sea ice cover the phytoplankton biomarker is absent but reaches maximum concentrations under open-water conditions (Fig. 3, Supplementary Fig. 1) 31,38 . For more semiquantitative estimates of present and past sea-ice coverage, Müller et al. 38 combined the sea-ice proxy IP 25 and a phytoplankton biomarker and calculated a phytoplankton-IP 25 index, the so-called ʽPIP 25 index': with the balance factor c = mean IP 25 concentration/mean phytoplankton biomarker concentration for a specific data set or core. As open-water phytoplankton biomarkers brassicasterol and dinosterol were used in this approach (see also ref. 39 . for further references and critical discussion of these sterols as organic source indicators). The balance factor c is needed due to the significant concentration difference between IP 25 and brassicasterol (or dinosterol). For the calculation of the mean concentration values of a specific core (interval) zero concentrations should be excluded. The coupling of IP 25 with phytoplankton biomarkers such as brassicasterol or dinosterol proves to be a viable approach to determine (spring/summer) sea ice conditions as is demonstrated by the good alignment of the PIP 25 -based estimate of the recent sea ice coverage with satellite observations 38 . More recently, Smik et al. 41 introduced the HBI-III alkene as a phytoplankton biomarker replacing the sterols in the PIP 25 calculation. This modified PIP 25 approach is far less dependent on the balance factor c and based on biomarkers from the same group of compounds (i.e., HBIs) with more similar diagenetic sensitivity. In a study of sediment cores from the western Barents Sea and the northern Norwegian Sea, Belt et al. 37 have calculated PIP 25 values using brassicasterol (ʽP B IP 25 ') and HBI-III (ʽP III IP 25 ') as a phytoplankton biomarker, respectively. Importantly, these authors could demonstrate that both approaches yielded similar outcomes if the core-specific balance factors were used (Fig. 7g), a fact also supported by our own data here. In this study, we have followed the P B IP 25 as well as P III IP 25 approaches for sediments from Core PS2138-2, using balance factors of c = 0.016 for P B IP 25 and both c = 1 and c = 0.32 for P III IP 25 ( Fig. 7a; see Supplementary Table 5). Model simulation. The simulations of the Arctic sea ice condition during the LIG are performed with the climate model COSMOS, which consists of the atmosphere model ECHAM5, the ocean/sea-ice model MPI-OM, and the land surface model JSBACH 70 . ECHAM5 and JSBACH both run on a resolution of T31, corresponding to ∼3.75˚× 3.75˚laterally, and the atmosphere is divided in 19 unevenly spaced layers. The ocean model offers a spatial resolution of 1.5˚× 3.5˚and is vertically divided into 40 unevenly spaced layers, with model poles placed over Greenland and Antarctica. Sea ice formation and dynamics are simulated in the ocean model. 
We simulate four distinct periods, three from the LIG corresponding to LIG-130 (130 ka), LIG-125 (125 ka), and LIG-120 (120 ka) with greenhouse gas and orbital values derived from ice core records [71][72][73] and astronomical calculations 74 ; see Supplementary Table 6 for model forcings. As a fourth control period, we simulate the PI conditions. In addition, we also have run a scenario with a closed Bering Strait and half-flooded shelf seas ( Supplementary Fig. 9). All of these model experiments were spun up for 2000 years to equilibrate to the changed boundary conditions. Following the equilibration period, we simulate an additional 100 years to be used for evaluation. The simulation with reduced Greenland ice sheet coverage is discussed in more detail in Pfeiffer and Lohmann 28 . Furthermore, we simulate the evolution of the LIG transiently, varying both greenhouse gas values and orbital values as the simulation progresses. This simulation is accelerated by a factor of 10, meaning that every simulated year accounts for 10 years of actual time, and the forcing is correspondingly also accelerated. LIG-130, LIG-125, and LIG-120 are compared to the periods in our transient simulation, and the resulting 100-year means over these time slices are nearly identical to the time slice experiments. The future scenario integrations follow the protocol in the IPCC Representative Concentration Pathway (RCP) scenarios RCP4.5 (583 ppm CO 2eq ) and RCP6 (808 ppm CO 2eq ) 2, 27 . Data availability. All data generated or analyzed within this study are included in this published article (and its supplementary information files) and available at https://doi.org/10.1594/PANGAEA.874357.
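As an illustration of the PIP 25 calculation described in the Methods above, the short sketch below shows one way the index and its core-specific balance factor could be computed from paired biomarker concentrations. It is a minimal example with made-up values and hypothetical variable names, not the processing code used in this study.

```python
# Illustrative sketch (not the authors' processing code): computing PIP25
# from paired IP25 and phytoplankton-biomarker concentrations, following
# the index definition given in the Methods. Variable names are hypothetical.
import numpy as np

def pip25(ip25, phyto):
    """Return (PIP25 values, balance factor c) for arrays of IP25 and
    phytoplankton biomarker concentrations; c = mean(IP25)/mean(phyto),
    with zero concentrations excluded from the means as recommended."""
    ip25 = np.asarray(ip25, dtype=float)
    phyto = np.asarray(phyto, dtype=float)
    c = ip25[ip25 > 0].mean() / phyto[phyto > 0].mean()
    denom = ip25 + phyto * c
    out = np.full_like(denom, np.nan)   # undefined if both biomarkers are absent
    np.divide(ip25, denom, out=out, where=denom > 0)
    return out, c

# Made-up down-core concentrations (e.g. ug/g sediment):
index, c = pip25([0.8, 0.4, 0.0, 0.05], [5.0, 20.0, 1.0, 60.0])
print(c, index)   # higher PIP25 -> more severe spring/summer sea ice cover
```

With the made-up numbers above, the balance factor comes out close to the order of magnitude of the core-specific values quoted for Core PS2138-2, but the numbers themselves carry no geological meaning.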
9,676.8
2017-08-29T00:00:00.000
[ "Environmental Science", "Geology" ]
Design method for VDCC-based analog comb filter for power line interference cancellation An analog comb filter is implemented by linking multiple VDCC-based notch filters in a cascading fashion (N in total), eliminating N different pole frequencies. This study focuses on suppressing a fundamental frequency of power-line interference of 50 Hz and its consecutive three odd harmonics at 150 Hz, 250 Hz, and 350 Hz. One significant advantage of this comb filter is the independent control over filters' parameters like quality factor and pole frequency. Additionally, these filters can be electronically tuned by adjusting the transconductance gain of VDCC. The suggested notch filter configuration involves 2 capacitors, 2 resistors, and 1 VDCC element. Extensive simulations were conducted using PSPICE simulator software to validate the effectiveness of these filters. The basic building block, VDCC, is designed and implemented in the simulation using integrated circuits MAX435 and AD844.• Design uses a VDCC-based high Q notch filter as the active building block.• The filter employs fewer active and passive components.• Simulated results using commercially available ICs, MAX435 and AD844, confirm the filter's practical utility. A valuable solution for achieving this is the utilization of a comb filter.Unlike notch filters, comb filters can attenuate multiple frequencies simultaneously.Analog filters hold a distinct advantage for real-time signal processing compared to digital counterparts.Consequently, numerous analog comb filters have been listed in existing literature [3][4][5][6][7][8][9] to eliminate the primary frequency of PLI and its associated odd harmonics from biological signals like electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG). The key benefits of the suggested comb filter, which represents the novelty of the circuit by overcoming the literature research gaps, can be summarized as follows: • Minimal Active Building Blocks: The proposed comb filter is notably efficient regarding active components, requiring only 4 VDCCs.Meanwhile, in [3][4][5][6][7][8] , more than four active building blocks are used.• Reduced Passive Components: Besides its frugality with active components, the proposed filter employs fewer passive components than [3][4][5][6][7][8] .• Low MOSFET Count: Compared to alternative approaches in [3][4][5][6][7][8] , the proposed filter demands a mere 48 MOSFETs, contributing to its efficiency and simplicity.• Orthogonal Parameter Relationship: The filter establishes an orthogonal correlation between the pole frequency and the quality factor, enhancing flexibility and adaptability for various applications. The proposed comb filter has only one limitation: the notch depth is lesser than the existing comb filters.Still, it is sufficient to attenuate the power-line interference effectively.Also, a high notch depth may distort the output signal. 
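Before turning to the circuit realisation, the cascaded-notch idea behind the comb filter can be sketched at the transfer-function level. The example below assumes ideal second-order notch sections of the standard form (it does not model the VDCC circuit itself), cascades four of them at the PLI fundamental and its odd harmonics, and reports the attenuation near each target frequency.

```python
# Behavioural sketch of the comb-filter idea (ideal second-order notch
# sections of the standard form, not the VDCC circuit itself): cascade four
# notches at 50/150/250/350 Hz with Q = 10/20/30/40 and check attenuation.
import numpy as np
from scipy import signal

def notch(f0, q):
    """H(s) = (s^2 + w0^2) / (s^2 + (w0/Q) s + w0^2)."""
    w0 = 2 * np.pi * f0
    return signal.TransferFunction([1, 0, w0 ** 2], [1, w0 / q, w0 ** 2])

sections = [notch(f0, q) for f0, q in zip((50, 150, 250, 350), (10, 20, 30, 40))]

f = np.linspace(1, 500, 5000)              # frequency axis in Hz
w = 2 * np.pi * f
h = np.ones_like(w, dtype=complex)
for sec in sections:                       # cascade = product of responses
    _, hk = signal.freqresp(sec, w)
    h *= hk

mag_db = 20 * np.log10(np.abs(h))
for target in (50, 100, 150, 250, 350):    # 100 Hz shown as a pass-band check
    print(f"{target:3d} Hz: {mag_db[np.argmin(np.abs(f - target))]:6.1f} dB")
```

Because the sections are cascaded, the pole frequency and quality factor of each notch can be set independently, which mirrors the independent parameter control highlighted above.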
Due to the inherent properties of current mode circuits, including lower power consumption, wider bandwidth, higher dynamic range, and simpler architecture, several current mode blocks have gained prominence in designing various analog signal processing and generating circuits.These include second-generation current conveyor (CCII) [10] , voltage differencing transconductance amplifier (VDTA) [11] , differential difference current conveyor (DDCC) [12] , voltage differencing gain amplifier (VDGA) [13] , multiple output current differencing transconductance amplifier (MOCDTA) [14] , and voltage differencing current conveyor (VDCC) [15 , 16] .The VDCC is particularly noteworthy for its ability to offer electronically adjustable transconductance gain and facilitate the concurrent current and voltage transfer across its terminals.This inherent versatility renders the VDCC highly conducive to the development of active filters and inductor simulators, among other applications.The VDCC, depicted in Fig. 1 , is an analog building block featuring five terminals, P, N, X, Z, and W. P and N are high-impedance input terminals, Z and W are high-impedance output terminals, and X is low-impedance output terminal.The VDCC comprises an operational transconductance amplifier (OTA) with a transconductance gain represented as "g m , " followed by a second-generation current conveyor (CCII) in cascade mode.The relationships among the various terminals of the VDCC are described in [16] as given in Eq. (1) . The port relationship in matrix form in Eq. ( 1) can be understood as follows.The VDCC operates as follows: The differential input voltage ( V P -V N ) undergoes multiplication by the transconductance gain (g m ) of the VDCC, resulting in a current output, denoted as I Z , which is accessible at output terminal Z .Simultaneously, the voltage V Z is present at terminal X , and it is equivalent to V Z .The current at terminal X is then conveyed to terminal W, meaning I W = I X .Notably, no current flows through the P and N terminals, leaving I P and I N equal to zero because of the high input impedance of the VDCC.An IC-based implementation of VDCC using an OTA IC MAX435 and a CFOA IC AD844, shown in Fig. 2 , is used in this research article.The exact VDCC implementation will be employed for simulation and has been explored for hardware implementation. A single VDCC-based notch filter has been implemented, as presented in Fig. 3 .In this implementation, 2 capacitors and 2 resistors are employed as the passive components.A VDCC is used as an active block in this filter. The transfer function is obtained by standard analysis of Fig. 3 as follows. The second-order notch filter transfer function can be expressed as: By comparing the coefficients in Eqs. ( 2) and (3) , we can derive the values of the pole frequency ( o ) in rad/sec and the quality factor ( Q ) for the notch filter as follows: Observing Eqs. 
( 4) and ( 5) , it becomes evident that the pole-frequency and the quality factor can be individually tuned to achieve specific values, allowing for independent control of these crucial parameters.In this design, we have intentionally set the value of R 2 as R 2 = 1/ g m for ease in setting the components' values for various pole frequencies and quality factors.Considering R 2 = 1/ g m , ( 2) , (4) , and ( 5) can be rewritten as: The manipulation of g m , C 1 , and C 2 provides the means to fine-tune the pole frequency to the desired value.In contrast, by employing R 1 , the quality factor can be independently adjusted while keeping gm, C 1 , and C 2 constant, as depicted in Eqs.(7) and (8) .The same notch filter is used in the proposed comb filter, discussed in the next section. A novel analog comb filter using VDCC is discussed here.The filter, which can sharply attenuate more than one frequency, is called a comb filter.Its name is given due to the comb-like magnitude response in the frequency domain.One of the methods to synthesize a comb filter is by connecting N number of notch filters for suppressing N number of pole frequencies.We have chosen four notch filters in the proposed design for suppressing four pole frequencies, one PLI, and its three odd harmonics.The cascading of four notch filters for four different pole frequencies with four different quality factors is presented in Fig. 4 .The notch filters are precisely tuned to target specific frequencies, including the fundamental frequency of power line interference at 50 Hz, as well as the 3 rd , 5 th , and 7 th odd harmonics at 150 Hz, 250 Hz, and 350 Hz. 10, 20, 30, and 40 quality factors have been thoughtfully selected to achieve a sharp and effective notching performance. The VDCC comb filter circuit, which is built upon the VDCC-based approach and incorporates four notch filters, is visually presented in Fig. 5 .This design utilizes 4 Voltage Differencing Current Conveyors (VDCCs), 8 capacitors, and 4 resistors to achieve its functionality.The transconductance gain of all four VDCCs is taken the same as g m because the same VDCC has been used for all the four notch filters.In the proposed comb filter, eight capacitors, C 1 , C 2 , C 3 , C 4 , C 5 , C 6 , C 7 , and C 8 , and four resistors, R 1 , R 2 , R 3 , and R 4 are used.By multiplying the transfer functions of four successively linked notch filters, it is possible to get the transfer function of the suggested comb filter shown in Fig. 5 .(9) gives: The filter's parameters can be expressed in Eqs.(11) and (12) as: Much like the parameters of the notch filter, these comb filter's parameters can be independently tuned to cater to specific requirements. Validation The suggested notch and comb filters of Figs. 3 and 5 are simulated using PSPICE.The analog building block of these filters, VDCC, is implemented using two high-performance ICs, MAX435 and AD844, in the simulation and the hardware implementation.IC MAX435 and IC AD844 are OTA (Operational Transconductance Amplifier) and CFOA (Current Feedback Operational Amplifier) ICs, respectively.These two ICs are cascaded to implement a versatile current-mode building block, VDCC.The simulation setup of the IC-based VDCC implementation is shown in Fig. 2 .This setup is used for the simulation and hardware implementation of the proposed notch and comb filter.For the proper biasing of these ICs, power supplies are taken as V DD = + 10V and V SS = -10 V.The transconductance gain, gm , of the VDCC is set by resistor R 1 of Fig. 
2 using a predefined relation between g m and the limiting resistor R 1 as g m = 4/R 1 . From this relation, g m can be calculated as 454.54 μA/V. This g m value has been used for the simulation and hardware implementation of the VDCC-based filters. The resistor R 2 used in Fig. 2 is the set resistor of IC MAX435. The value of R 2 is carefully chosen as 4.7 kΩ. The terminals of the VDCC, P, N, Z, X, and W, are made available as the input-output ports of the VDCC block. The same block has been used to simulate the notch and comb filters, shown in Fig. 3 and Fig. 5 , respectively. First, the notch filter has been designed for a pole frequency of 50 Hz and a quality factor of 10 to suppress the 50 Hz PLI effectively. For this design, the passive components (capacitors and resistors) used in Fig. 3 are chosen as C 1 = 362 nF, C 2 = 5.8 μF, R 1 = 5.5 kΩ, and R 2 = 2.2 kΩ. As discussed above, the transconductance gain of the VDCC, g m , is set to a value of 454.54 μA/V. The notch filter's simulated magnitude and phase responses are shown in Figs. 6 and 7 . These responses demonstrate the efficacy of the proposed notch filter in attenuating the 50 Hz power line interference with a high degree of precision and sharp notching. In the magnitude response, shown in Fig. 6 , a notch depth of -45.7 dB is obtained, effectively suppressing the 50 Hz PLI. The phase response, shown in Fig. 7 , exhibits a sharp phase change of 360° from 0° to -360° at the pole frequency. The proposed comb filter, shown in Fig. 5 , has been simulated with the same VDCC setup. The values of the passive components (capacitors and resistors) used in Fig. 5 are given in Table 1 . With these values of capacitors and resistors, the four cascaded notch filters are set for pole frequencies of 50 Hz, 150 Hz, 250 Hz, and 350 Hz and quality factors of 10, 20, 30, and 40, respectively. The magnitude response corresponding to these settings is depicted in Fig. 8 . A sinusoidal waveform is run through the suggested comb filter to assess its effectiveness in the time domain. The frequency and the amplitude are taken as 50 Hz and 100 mV, respectively. The 50 Hz waveform resembles the 50 Hz PLI, whereas the 100 mV amplitude is sufficient to resemble the amplitude of PLI in low-frequency, low-amplitude biological signals. The input and output waveforms are shown in Fig. 9 . It can be seen that the 50 Hz input signal is well suppressed after some settling time, as expected. This verifies in real time how well the suggested comb filter performs. Table 1 Values of passive components. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2,794.4
2024-02-01T00:00:00.000
[ "Engineering" ]
Numerical continuation applied to internal combustion engine models This paper proposes tools from bifurcation theory, specifically numerical continuation, as a complementary method for efficiently mapping the state-parameter space of an internal combustion engine model. Numerical continuation allows a steady-state engine response to be traced directly through the state-parameter space, under the simultaneous variation of one or more model parameters. By applying this approach to two nonlinear engine models (a physics-based model and a data-driven model), this work determines how input parameters ‘throttle position’ and ‘desired load torque’ affect the engine’s dynamics. Performing a bifurcation analysis allows the model’s parameter space to be divided into regions of different qualitative types of the dynamic behaviour, with the identified bifurcations shown to correspond to key physical properties of the system in the physics-based model: minimum throttle angles required for steady-state operation of the engine are indicated by fold bifurcations; regions containing self-sustaining oscillations are bounded by supercritical Hopf bifurcations. The bifurcation analysis of a data-driven engine model shows how numerical continuation could be used to evaluate the efficacy of data-driven models. Introduction Engineers often make use of computer models to replicate dynamic behaviour. When nonlinearities are known to influence model output, time history simulations can be used to build up a picture of a given model's dynamic behaviour by running multiple simulations for different initial conditions, to identify (for example) which inputs produce a given output. While a simulation of a computation model for one initial condition and one set of parameters can be a relatively short process, the cumulative time needed to simulate the response for many different initial conditions and parameter values means that time history simulations can become computationally inefficient. If the system's steady-state is of interest (such as in traditional engine mapping), transient behaviour needs to be simulated first as the system reaches a steady-state: this adds time to each simulation. The presence and location of any repelling equilibria are also difficult to find through conventional time history simulations alone, because initial conditions that start near repelling equilibria move away from these equilibria over time. Although real systems do not reach repelling equilibria, knowledge of their location allows engineers to identify key regions in the system's state space, as they often separate regions of the state space where similar initial conditions may have quite different steady-state responses. A variety of approaches to complement conventional time history simulations of nonlinear dynamic models have been presented in engine literature: recurrence plots and recurrence quantification analysis have been used to investigate the relation between injection impulse with engine speed 1 and the dynamics associated with cycle-to-cycle variations in a diesel engine 2 ; multilevel substructuring procedures were used to study periodic responses of large engine models in Theodosiou et al. 3 ; 0-1 testing was applied to classify chaotic behaviour within the combustion process. 
4 Understanding the steady-state response of engine models is also highlighted as a challenge in the literature: in Beno et al., 5 the authors studied the calibration of a turbocharged diesel engine by representing the highly nonlinear system as a mean value model. All of these approaches serve to enhance the automotive engineer's toolbox of analysis techniques, allowing specific information from a dynamic model to be obtained efficiently. For example, these studies were useful in obtaining useful information about the combustion process in specific cases, revealing that it becomes highly sensitive to initial conditions under certain conditions: reducing engine speed and torque to low values presents chaotic behaviour, while raising speed and torque to high values removes this behaviour. A bifurcation analysis approach is an alternative method of obtaining useful information about a nonlinear system's dynamics. Bifurcation theory considers how the equilibrium solutions of system of ordinary differential equations change as one or more parameter is varied. When the equilibrium solution experiences a qualitative change in behaviour, such as a change in stability, it is said to bifurcate: these points have particular mathematical descriptions, and can be followed under the simultaneous variation of multiple parameters. Bifurcation theory has been previously used to help understand a variety of dynamics problems in the engineering industry, seeing extensive use in aeronautical literature: bifurcation analysis has been shown to be a useful tool in studying aeroelasticity in wings, observing self-sustaining oscillations under the variation of one or more parameters 6,7 ; it has been used in the analysis of aircraft undertaking ground manoeuvres [8][9][10] and also the analysis and recovery of deep stall. 11 The approach obtains results that would be unobtainable with linear analysis alone. The approach has also been used in current literature directly related to the automotive industry, in areas such as vehicle dynamics, wheel and suspension dynamics, permanent synchronous motors and braking systems. A bifurcation analysis helped determine how driving speed and steering angle 12 and torque and steering angle 13 affect the stability in a cornering vehicle. In addition, the lateral dynamics of a vehicle driving straight ahead was also studied. 14 The methods are also shown to be useful in the analysis of wheels, tyres and suspension dynamics, with the damping force shown to dramatically affect the behaviour of the suspension 15 and the slip angle and vehicle speed shown to play major roles in the development of vibration. 14 Shimmy behaviour has also been studied, with limit cycles being observed under bisectional road conditions 16 and bistable parameter ranges were found for towed wheels. 17 Furthermore, the approach was used in determining the stability of permanent synchronous motors 18 and regions of dynamic behaviour in braking dynamics. 19 More generally, the approach has been used to study larger vehicles, including how lane changes effect buses 20 and semi-trailers 21 and finding critical speeds for railway vehicles. 22 Bifurcation analysis is proposed in this work as another complementary approach that can be used to explore the nonlinear dynamic behaviour of engines. 
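To make the idea of following equilibria under parameter variation concrete, the sketch below applies a deliberately naive natural-parameter continuation to a generic two-state toy system, not an engine model: each converged equilibrium seeds the next solve, and stability is classified from the Jacobian eigenvalues. Dedicated continuation codes (such as the one used later in this paper) employ more robust pseudo-arclength predictor-corrector schemes that can also continue around folds rather than stopping at them.

```python
# Minimal sketch of a naive natural-parameter continuation: follow an
# equilibrium of dx/dt = f(x, mu) as mu varies, reusing the previous
# solution as the starting guess and classifying stability from the
# Jacobian eigenvalues. The system is a toy example, not the engine model.
import numpy as np
from scipy.optimize import fsolve

def f(x, mu):
    x1, x2 = x
    return np.array([mu - x1**2 - x2, x1 - x2])       # toy vector field

def jacobian(x, mu, eps=1e-6):
    J = np.zeros((2, 2))
    for i in range(2):
        dx = np.zeros(2); dx[i] = eps
        J[:, i] = (f(x + dx, mu) - f(x - dx, mu)) / (2 * eps)
    return J

branch = []
x = np.array([1.0, 1.0])                               # known steady state at mu = 2
for mu in np.linspace(2.0, -0.5, 200):                 # vary one parameter
    x, info, ok, _ = fsolve(f, x, args=(mu,), full_output=True)
    if ok != 1:
        break                                          # fold reached: this naive scheme stops
    eigs = np.linalg.eigvals(jacobian(x, mu))
    attracting = bool(np.all(eigs.real < 0))           # a sign change marks a bifurcation
    branch.append((mu, x.copy(), attracting))

print(f"traced {len(branch)} equilibria; stopped near mu = {branch[-1][0]:.3f}")
```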
To perform a bifurcation analysis on systems (such as engines) where the dynamic equations do not yield analytical expressions for their bifurcations, numerical continuation is the method used to undertake a computational bifurcation study. Starting from a known steady-state response, the numerical continuation algorithm traces out branches of equilibria under the variation of one parameter. When a bifurcation is detected, the continuation algorithm can trace out branches of the same type of bifurcation as two parameters are varied (simultaneously). The branches of equilibria and bifurcations are usually displayed on bifurcation diagrams, which illustrate the long-term behaviour of the nonlinear dynamic system. For further information on bifurcation theory, see Strogatz 23 and Kuznetsov. 24 While previous applications of bifurcation theory are broad, to the authors' knowledge, bifurcation analysis methods have seen limited application to the study of engine dynamics. The authors have previously presented a bifurcation analysis of an open-loop internal combustion engine 25 where two parameters, throttle and torque, have been varied. This paper expands on the previous work by conducting additional two-parameter and new three-parameter continuations, including a more realistic altitude and temperature model and the analysis of a data-driven model. To the authors' knowledge, these are the first examples of papers to conduct a full bifurcation analysis of an open-loop internal combustion engine for both a data-driven and a physics-based model. The aim of this paper is to demonstrate how a bifurcation analysis can be used to complement conventional time history simulations in the analysis of internal combustion engine models. The numerical continuation code AUTO, 26-28 integrated into MATLAB via the dynamical systems toolbox, 29 is used to conduct the bifurcation analysis. The paper begins with an overview of the two models used in the paper: one physics-based model and one data-driven model. The physics-based model is studied first, to demonstrate how the approach can be used to provide insight into engine dynamics for different operating points of throttle and desired load torque. The sensitivity of this behaviour to changes in additional parameters is then studied, before comparing the bifurcation results from the physics-based model with results from a data-driven model. This demonstrates the application of numerical continuation to a more robust and realistic engine model. Finally, key findings from the analysis of both models are compared and summarized, and recommendations for future work are outlined. Engine models This work uses two different model types: a physics-based model adapted from Guzzella and Onder 30 and a data-driven model, created from engine test data. These two models are used to demonstrate the applicability of bifurcation analysis methods to the study of engine dynamics, given that engine models may be physics-based, data-driven or a combination of the two. Physics-based engine model The following assumptions, as described by Guzzella and Onder, 30 are made: there is no pressure drop in the intake runners; the temperature drop from the evaporation of the fuel is cancelled out by the heating effect of the hot intake duct walls, so the temperature in the cylinder is the same as in the intake manifold; the air/fuel ratio is constant and equal to stoichiometry; volumetric efficiency depends on manifold pressure; the exhaust manifold pressure is constant;
the engine may be simplified as a Willans machine for torque generation; the calculation time and the sampling time can be neglected; the engine has a constant inertia and all internal friction is included (in the input); the engine-speed sensor is linear and very fast; and the dynamics of the alternator and electronic load may be lumped into one first-order system. 30 The physics-based model is governed by three coupled first-order nonlinear differential equations. The dynamics associated with the intake manifold pressure, P m ; engine speed, v e ; and torque, T l , are represented by equations (1)-(3), respectively. 30 Here, R is the gas constant of air, q m is the manifold air temperature, V m is the intake manifold volume, Y e is the engine inertia and t l is a time constant. Two parameters are typically considered as control inputs: u a ∈ [0, 1], the throttle position, and u l , the desired load torque. Equation (1) is the first differential equation and models the dynamics associated with the mass flow entering (m a ) and exiting (m b ) the manifold. The mass flow entering the manifold is modelled by equation (4). Here, p a is the ambient pressure, q a is the ambient temperature, and a, b, c and d are fitted parameters. To conduct a bifurcation analysis, all equations must be smooth. Therefore, rather than use a piecewise approximation as in Guzzella and Onder, 30 the values of a, b, c and d are chosen to provide a physically reasonable approximation of the throttle mass flow. Figure 1 demonstrates the output of the fitted curve in equation (5) as a function of p m . At lower values of p m , such that p m << p a , the mass flow is represented by a constant, before decaying and approaching zero as p m approaches p a . The throttle area, A a , is a function of throttle position (u a ) given by equation (6). 30 Here, d th is the throttle diameter, a th,0 is the throttle angle offset and A th,leak is the leakage area. The mass flow exiting the manifold is modelled by equation (7). 30 Here, V c is the compression volume at top dead centre; V d is the displacement volume of the cylinders; p e is the exhaust-manifold pressure; k is the isentropic exponent of air; g 0 , g 1 and g 2 are gear ratios; l is the air to fuel ratio coefficient; and s 0 is the stoichiometric air to fuel ratio. The second differential equation (equation (2)) models the dynamics associated with the engine's rotational motion, which depends on the engine's capacity to generate torque. The torque generated by the engine (T e ) is described in equation (8). 30 Here, the terms h 0 , h 1 , b 0 and b 2 are Willans parameters, which are simplifications of the engine's characteristics, while H l represents enthalpy. All other quantities are as defined for earlier equations. Equation (3) is the final differential equation, which models the load torque as a first-order delay. The numerical values used are summarized in Table 1. Data-driven engine model Engine models used in industrial settings often make use of experimental data to generate a dynamic model. For bifurcation methods to be appropriate for an industrial context, they need to be shown to work for these sorts of data-driven models. In this work, a data-driven engine model taken from MathWorks 31 is studied, to show that bifurcation methods can be used to analyse data-driven models as well as equation-based physics models. Figure 2 provides a schematic of the data-driven Simulink model used.
The top-level model has two inputs: a transmission torque (in Nm) and a throttle position (expressed as a percentage). The 'EngineTorque' block in Figure 2 is a two-dimensional data table that relates a throttle position and engine speed input to an engine torque output. The input 'Transmission_Torque' represents the external load applied to the engine, which may arise in real operations due to road gradients, drag or rolling resistance. The difference between this externally applied torque and the torque produced by the engine causes the engine to speed up or slow down: the engine speed is determined by dividing the net torque using a representative inertia value ('EnginePlusImpellerInertia') to obtain the engine's acceleration, and then integrating acceleration to calculate engine speed (Ne). Although this is a simplistic model, the data table captures a typical nonlinear relation between engine speed and the torque that is produced (both as a function of throttle position). A visual representation of the data table contained in the model is shown in Figure 3. The model assumes that the data collected is sufficient to capture the relation between throttle position, engine speed and engine torque. The above data provide an accurate torque output based on engine speed and throttle angle. The discontinuous nature of such data, shown in Figure 3, makes a bifurcation analysis of the system in its current form infeasible. As such, the data need to be smoothed with an appropriate interpolation and extrapolation method. Simulink offers several techniques to smooth the data for interpolation and extrapolation purposes: in this study, the inbuilt cubic spline technique is used to fit a continuous third-order polynomial through neighbouring data points in the 'EngineTorque' data block. This ensures data are as smooth as possible for the nonlinear analysis. Figure 4 provides time history simulations at an example operating point ðu a ; u l Þ = ð0:72; 250Þ that shows the trajectory of the engine speed when the data are raw (a) versus that of the data that have been smoothed (b). Bifurcation analysis of physics-based engine model This section contains a demonstration of a bifurcation analysis of the physics-based model and a discussion of the results obtained at each stage. To begin, one parameter will be varied to show the different types of bifurcation that exist in the state-parameter space. With the bifurcations observed, these points themselves can then be traced in a two-parameter continuation, showing any complexities or additional bifurcations that exist under the variation of two parameters. These results exhibit dynamics first presented by the authors in Smith et al., 25 but they provide relevant context in this paper for the subsequent sensitivity study and comparison with results from a data-driven engine model. The sensitivity study shows how the nonlinear dynamic behaviour changes as a third parameter of operational interest is varied, expanding on previous results in Smith et al., 25 by constructing surfaces of bifurcations to show their quantitative variation as a function of three parameters of interest. The analysis also expands on previous work through inclusion of new results showing how peak engine torque varies with these additional parameters. One-parameter continuation results Throttle angle,u a , is chosen as the continuation parameter, with all other model parameters fixed. 
Of the two model inputs for the system, throttle position is chosen as the primary parameter as this is the input controlled by the user during real operation of the engine. Its range of operation is also known, as the throttle can vary from fully closed to fully open, that is, u a 2 ½0; 1. The second model input, torque, is not controlled by the user in real operation of the engine: torque is applied on the engine as the vehicle accelerates or decelerates in response to driver inputs. As a result, it is viewed as a secondary parameter. An initial load torque, u l , of 100 Nm is chosen to represent an automobile on an incline. As the torque produced by the engine, T l ðtÞ ! u l for t ! ', only the states p m and v e are plotted as a function of the continuation parameter u a . Each bifurcation diagram may contain several branches of equilibria and type of bifurcation. All are described in detail as and when they occur; however, a summary is also provided in Table 2. Figure 5 shows the first bifurcation diagram which shows the steady-state engine response of both manifold pressure (a) and engine speed (b) as a function of u a . The branch of dynamically attracting equilibria (solid line) indicates the steady-state engine speed and manifold pressure for a given throttle position, to which nearby initial conditions will tend towards over time. There is also a branch of dynamically repelling equilibria (dashed line), which are points from where nearby initial conditions will move away over time. The two branches are separated at u a = 0:0617 by a fold bifurcation. As throttle angle is increased, the steady-state engine speed achieved at the attracting equilibria (solid curve, Figure 5(b)) increases rapidly over a narrow range of throttle angles from u a = 0:1 to u a = 0:2, until the engine reaches a speed close to its maximum speed of 500 rad/s; here, throttle angle causes only a slight increase in engine speed to be observed for u a ø 0:2. This is a physically intuitive result, because for a given load, increasing throttle angle will increase engine speed. The attracting equilibria thus represent the achievable steady-state operating point for the engine. The concept of a repelling equilibrium point is less intuitive, because systems at such points are unstablethey are hence seldom observed directly in real systems. An intuitive example of a repelling equilibrium point in a physical system is an 'upside-down' rigid pendulum: mathematically, such a condition exists, but in practice, the pendulum would fall from this position if disturbed even slightly and end up hanging down (at the attracting equilibrium position). Because of their nature, repelling equilibria are difficult to observe through conventional time history simulations, but they are important for the dynamics of the system because they separate nearby initial conditions that lead to different long-term behaviour: in the case of the pendulum, the repelling equilibrium point dictates if initial conditions (value at t = 0 for angular position and angular velocity) will result in the pendulum swinging clockwise or anti-clockwise; for the engine, the repelling equilibria dictate if the initial condition (value at t = 0 for manifold pressure and engine speed) will cause engine speed to increase or decrease. For values of u a . 0:0617, two equilibria exist: one attracting and one repelling. Depending on the initial conditions of the system, the engine will either reach a steady running state (i.e. 
tend to the solid equilibria over time) or stop running (i.e. engine speed tends to zero). For u a < 0.0617, indicated by the fold bifurcation in Figure 5, the engine cannot sustain a non-zero steady-state engine response. Therefore, the fold bifurcation indicates the minimum throttle angle required for the engine to maintain a steady state. Piecing the information together from the description of the bifurcation diagram, it can be shown that the bifurcation diagram enables the destination of any dynamic trajectory to be inferred, that is, to work out where the system will end up as t → ∞. Figure 6 shows the bifurcation diagram (panel (a)) with four dynamic trajectories included. The simulations are summarized in Table 3. These dynamic trajectories are shown as a function of time in panels (b1)-(b4). Initial conditions A, B and C all have a throttle position of u a = 0.8. Correct interpretation of the bifurcation diagram suggests that initial conditions that start near the attracting branch (solid line) will tend towards this line over time, while initial conditions that start near the repelling branch (dashed line) will tend away from the repelling branch over time. Initial condition A (v e = 300 rad/s) demonstrates a dynamic trajectory that indeed tends away from the repelling branch and towards the attracting branch. Initial condition B (v e = 550 rad/s) likewise tends towards the attracting branch with time, ending up at the same steady-state engine operating point as initial condition A. For initial condition C (v e = 30 rad/s), the engine speed is now too low, and the nearby repelling branch sends the trajectory towards zero engine speed. Finally, to observe what happens when no equilibria are present, an initial condition D (v e = 300 rad/s) is chosen with a throttle position u a = 0.05. Under these conditions, the engine speed will tend towards zero due to the absence of any attracting equilibria. The bifurcation diagrams in Figures 5 and 6 were obtained for one specific engine load, u l = 80 Nm. To examine the system's dynamics at different load torques, another one-parameter bifurcation analysis can be conducted at a different load torque: in this case, a load u l = 1 Nm is chosen to represent a lightly loaded engine. Figure 7 shows the resulting bifurcation diagram for fixed u l = 1 Nm. In contrast to Figure 5, which has no equilibria for u a < 0.0617 and two equilibria for u a > 0.0617, Figure 7 has a single equilibrium for any throttle across the u a range. This is a significant qualitative change, as now the system can achieve a steady-state response if u a > 8.373 × 10^-3. Also unlike Figure 5, at u a = 8.373 × 10^-3, a Hopf bifurcation is observed. As with the fold bifurcation, this point separates attracting and repelling branches; however, a Hopf bifurcation gives rise to a periodic response in the form of limit cycle oscillations. In this case, the Hopf bifurcation is supercritical and thus bounds the region in which limit cycle oscillations occur (i.e. u a < 8.373 × 10^-3). The amplitude of these limit cycle oscillations can also be obtained using numerical continuation: the solid line shows the maximum and minimum oscillatory values. To demonstrate that this periodic behaviour is a function of the mathematical model, Figure 8 contains the dynamic response either side of the Hopf bifurcation. Figure 8(a) and (b) show the response for u a < 8.373 × 10^-3 and u a > 8.373 × 10^-3, respectively.
For u a < 8.373 × 10^-3, the system exhibits an indefinite oscillatory response, which has a fixed amplitude corresponding to the amplitude found in Figure 7. For u a > 8.373 × 10^-3, after an initial few seconds of transient behaviour, the trajectory settles to a steady-state response. A possible explanation for the occurrence of a Hopf bifurcation in the model at low load is that the engine speeds up too much for the amount of air that can be drawn into the cylinders. When the throttle angle is slightly open (e.g. Figure 8(a)), there is enough airflow to increase the engine's speed; however, the momentum gained causes the engine speed to rise to a point at which it cannot be sustained with this throttle angle. At peak engine speed, the air flowing into the cylinder is insufficient to sustain the peak speed, so the engine slows down. During the deceleration, the airflow again becomes sufficient to drive the engine at a given speed; however, the inertia causes the engine to continue to decelerate past this sustainable level to a minimum speed. At this minimum speed, the engine can get sufficient air to sustain a higher speed, so it accelerates, and the process repeats indefinitely. This out-of-phase motion between pressure and speed is observable in Figure 8(a). Figure 8(b) shows a time history simulation at a throttle angle just above the Hopf bifurcation (u a > 8.373 × 10^-3), indicating that these throttle angles provide enough airflow to maintain a steady-state engine speed. The one-parameter bifurcation diagrams in Figures 6 and 7 have shown that the engine model exhibits different dynamic behaviours at different loads. The following section presents a two-parameter bifurcation analysis of the engine in order to study how the engine's dynamics vary with different load conditions. Two-parameter continuation results The evolution of the fold and Hopf bifurcations can be numerically traced throughout the model's state-parameter space as both throttle angle and load torque are varied simultaneously. Computations of this kind are defined as two-parameter continuations: they provide further information about the location of each bifurcation and any interactions between the two. Figure 9 presents the results of the two-parameter continuation and shows the evolution of the fold and Hopf loci. There are several features of the graph produced. The fold locus turns back on itself, and the point at which this occurs is known as a cusp bifurcation. The Hopf locus also turns back on itself and then collides with the fold locus: the exact point where the loci collide is called a Bogdanov-Takens bifurcation and terminates the locus of Hopf points. At high torque values, there is only a fold bifurcation, indicating that a one-parameter continuation in this region will resemble Figure 5. At lower torque values, only a Hopf bifurcation exists, meaning one-parameter continuations here will result in figures such as that in Figure 7. In the region between the two load cases already considered, as the curves turn back on themselves, one-parameter continuations could contain multiple fold and Hopf bifurcations depending on the corresponding torque values. To demonstrate the behaviour that exists in this region, one-parameter continuation diagrams can be created at appropriate torque values. The torque value in case (i) corresponds to that used in Figure 7, rather than Figure 5, and is representative of the behaviour from u l = 0 Nm to the cusp bifurcation at u l = 14 Nm. The torque value in case (ii) corresponds to a torque value immediately above the cusp bifurcation.
The resulting one-parameter bifurcation diagram can be found in Figure 10. Increasing the load torque beyond the value at the cusp point results is a branch of dynamically attracting and repelling equilibria separated by a Hopf bifurcation, with two further fold bifurcations on the repelling branch. The amplitudes of the Hopf bifurcation grow until they collide with the repelling branch, which has changed location due to the fold loci. The resulting collision creates a homoclinic orbit. For throttle angles to the left of the homoclinic orbit, there is only repelling equilibria with no periodic behaviour. Any time history simulation in this region would head towards a zero-engine speed. Throttle angles above the Hopf bifurcation result in a steady-state engine response being achievable. Between the Hopf bifurcation and the homoclinic orbit is the region in which periodic responses occur. A possible explanation for this behaviour is that unlike Figures 7 and 8, there is an extra region of behaviour where, for higher torques at low throttle angles, not enough air can get through to maintain any level of engine speed. For the long-term response to be steady state, the mass flow in must be greater than the mass flow out, that is, m a . m b in equations (1)- (3). For the response to be zero, mass flow out must exceed mass flow in, m b . m a . This result shows that for a certain combination of torque and throttle, neither of these results will be true in the long term. While m a . m b will be true for a moment in time, m b . m a occurs moments later. This process continues indefinably, and hence long-term self-sustaining oscillations occur for this torque and throttle combination. For a higher torque value, both fold bifurcations move position until one leaves the physically meaningful range. Figure 11 contains the one-parameter bifurcation for case (iii) which is an example of a curve featuring a single fold and Hopf bifurcation. As before, there is a homoclinic orbit; however, as the second fold has left the physically meaningful region, no equilibria exist for very low throttle angles. Raising the torque allows the effects of case (iv) to be demonstrated. At this point, the torque is above the Bogdanov-Takens bifurcation, but not high enough to surpass the point at which the Hopf loci turns back on itself. This case is therefore unique as two Hopf bifurcations will exist in this region. Figure 12 contains a bifurcation diagram for the one-parameter continuation at 47Nm showing these two Hopf bifurcations which bound the limit cycle region. Increasing torque to a higher value means these Hopf bifurcations will get closer to each other and the size of the amplitudes of the limit cycles' oscitations will shrink until the two Hopf bifurcations collide and are destroyed. Case (v) is shown in Figure 13 and is representative when torque is raised to 55Nm. The system returns to a structure that is equivalent to the structure shown in Figure 5. This qualitative description is the same for remaining torque values until the peak load torque of 282Nm which is the point at which the minimum throttle angle (denoted by the fold bifurcation) leaves the physical meaningful range u a 2 ½0; 1. Figures 10-13 contain branches of dynamically attracting equilibria that have a similar structure. 
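As noted earlier, dynamically repelling equilibria are hard to pin down from time histories alone. As a rough illustration of the kind of search this entails when only a black-box simulator is available, the sketch below bisects on the initial engine speed to localise the stall boundary. The one-state "simulator" uses an assumed rise-and-fall torque curve purely as a stand-in; it is not the engine model analysed in this paper.

```python
# Sketch: locating the repelling equilibrium (stall boundary) of a black-box
# engine simulation by bisecting on the initial engine speed. The one-state
# caricature below (assumed torque curve, assumed numbers) stands in for
# time-history simulation of the full model.
import numpy as np
from scipy.integrate import solve_ivp

def torque_curve(w, u_a, w_peak=250.0, t_max=400.0):
    # Torque rises with speed, peaks at w_peak, then falls off (toy shape).
    return t_max * u_a * (w / w_peak) * np.exp(1.0 - w / w_peak)

def final_speed(w0, u_a=0.8, u_l=80.0, inertia=0.2, t_end=60.0):
    def rhs(t, y):
        w = max(y[0], 0.0)                       # engine cannot spin backwards
        dw = (torque_curve(w, u_a) - u_l) / inertia
        return [0.0 if (y[0] <= 0.0 and dw < 0.0) else dw]
    sol = solve_ivp(rhs, (0.0, t_end), [w0], max_step=0.1)
    return sol.y[0, -1]

# Bisection between a stalling and a running initial condition.
lo, hi = 1.0, 300.0                  # final_speed(lo) stalls, final_speed(hi) runs
for _ in range(25):
    mid = 0.5 * (lo + hi)
    if final_speed(mid) < 50.0:      # stalled
        lo = mid
    else:                            # reached the attracting branch
        hi = mid
print(f"stall boundary (repelling equilibrium) near w_e = {hi:.1f} rad/s")
```

Each bisection step requires a full simulation, which illustrates why continuation, which follows the repelling branch directly, is far more efficient.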
Using traditional time history simulations alone, the dynamically repelling equilibria would have been much more difficult to detect, requiring an extensive number of simulations for a range of initial conditions and parameter values. (Figure 11. One-parameter bifurcation diagram showing the response of (a1) manifold pressure and (b1) engine speed, with zoomed views (a2) and (b2) below, for u l = 31 Nm.) In addition, the region of limit cycle oscillations is very small, making it difficult to find using time history simulations alone. With the two-parameter bifurcation diagram describing the engine's dynamics in terms of engine speed and load, it is possible to investigate how this picture changes as other parameters are altered. The subsequent section performs a sensitivity analysis of the results obtained so far, to further demonstrate the usefulness of a bifurcation approach. Sensitivity analysis of the bifurcation diagrams The previous bifurcation analyses present an overview of all the bifurcations in the system's state-parameter space and the type of behaviour to be expected. In this section, the preceding results will be treated as a benchmark case, and the sensitivity of the bifurcations to changes in additional model parameter values is investigated. By varying throttle, torque and a third parameter, a three-dimensional (3D) bifurcation diagram can be constructed by stacking multiple two-parameter continuation results and plotting the results as a surface in terms of the three parameters. These results can be compared to the benchmark case to determine how sensitive the bifurcations are to changes in these additional parameters. In the model, a controller is assumed to fix the air-fuel ratio l to the constant stoichiometric value l = 1; however, in a real driving scenario, this value is unlikely to remain exactly fixed and will slightly deviate. Figure 14 shows the bifurcation sensitivity analysis for l. No qualitative change is observed when continuations are run either side of l = 1, even for values that far exceed any legal boundaries. Extreme rich and lean values are used to demonstrate how insensitive the model is to changes in l. Another constant in the model is the ambient pressure, which is fixed to P a = 0.98 × 10^5. Raising the value of P a crudely replicates some of the effects that would be expected by turbocharging the engine. Figure 15 provides the results of continuations run to demonstrate the effects of raising this to 2 × P a . This shows that an increase in ambient pressure will greatly increase the peak load torque value, as the location of the fold bifurcation is altered. The location of the Hopf locus is also altered as P a is changed. The Hopf curve exists for a greater amount of torque before it turns and collides with the fold curve, indicating that periodic responses can be found over a larger torque range. The range in which both fold and Hopf loci exist is shown to grow as P a is increased. Additional continuations can verify this result and also determine whether the relationship between peak load torque and the alteration we wish to make, in this case a change in pressure, is nonlinear. Fixing the throttle angle to its maximum value and tracing the fold bifurcation as torque varies with ambient pressure provides the peak load torque value across the parameter range. Figure 16 shows the exact peak load torque available as ambient pressure is varied across the P a = [0, 3P a ] range.
A minimum requirement for ambient pressure to produce any torque is observed at 0.2 × P a . The relationship is determined to be linear. In order to simulate changes to altitude, the ambient pressure P a and ambient temperature q a need to be expressed as functions of altitude so that the appropriate values can be used, as in Figure 17. As this is a demonstration, very simple models are used to calculate ambient temperature and pressure, 32 which are slightly modified to ensure that the ground-level values match those in Table 2 and the original study. 30 Conversely, decreasing ambient pressure, along with decreasing the ambient temperature, replicates the effects of running the engine at altitude. Figure 18 provides the results for two continuations when parameters are changed to replicate what they would be in extreme conditions, such as an altitude of up to 4000 m. In comparison to Figure 15, the changes are reversed: as a higher altitude limits the amount of airflow into the system, the fold locus changes position, resulting in a reduced amount of torque available at a given throttle angle; the area in which the Hopf locus is present reduces for increased altitude. As before, as peak load torque varies with altitude, two-parameter continuations for altitude and peak load torque can be conducted to trace the fold loci and obtain the peak load torque for any altitude. Figure 19 provides insight into how altitude affects the peak load torque. It is observed that as altitude increases, the peak load available decreases nonlinearly. Parameters chosen in the design process may also be altered to determine how they affect the system's dynamics. Figure 20 shows results for an increase (a) and decrease (b) in the size of the flywheel. (Figure 16. Two-parameter continuation showing how peak load torque varies as ambient pressure increases.) The location of the fold bifurcation locus does not change qualitatively in either alteration; however, the throttle angle range in which the Hopf locus exists increases for a smaller flywheel and decreases for a larger flywheel. An even larger flywheel would cause the Bogdanov-Takens bifurcation to occur outside the physically meaningful range, which may explain why this sort of oscillatory behaviour is not typically observed in actual engines. The bifurcation sensitivity analysis demonstrates how the locations of the fold and Hopf bifurcations are affected should additional parameters, such as lambda, ambient pressure, altitude and engine inertia, deviate from their fixed values. Such deviations may occur during operation (altitude, pressure, lambda) or design (engine inertia). While no additional bifurcations were observed in the sensitivity study, the parameter values of the fold and Hopf bifurcations were observed to change. Changing the air flow parameters, such as altitude or ambient pressure, modified the parameter region in which the fold and Hopf bifurcations were observed. In comparison, changing the size of the flywheel moved the location of the Hopf bifurcation but not the fold bifurcations. These results demonstrate how a bifurcation analysis can be used to study parameter sensitivities: the analysis determines which type of modification, whether it be airflow or mechanical, will modify the parameter values that lead to certain types of dynamic behaviour (such as limit cycle oscillations).
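For illustration, the kind of simple altitude relation described above can be written as a linear temperature lapse combined with a barometric pressure law, anchored so that zero altitude reproduces the model's ground-level ambient pressure. The ground temperature and lapse rate below are standard-atmosphere style assumptions, not values taken from the paper.

```python
# Sketch of a simple altitude model: linear temperature lapse plus a
# barometric pressure law, anchored at the model's ground-level pressure.
# LAPSE and T0 are standard-atmosphere style assumptions.
P0 = 0.98e5        # ground-level ambient pressure used in the model
T0 = 298.0         # assumed ground-level temperature [K]
LAPSE = 0.0065     # assumed temperature lapse rate [K/m]
G, R_AIR = 9.81, 287.0

def ambient(h):
    """Return (pressure, temperature) at altitude h in metres."""
    T = T0 - LAPSE * h
    p = P0 * (T / T0) ** (G / (R_AIR * LAPSE))
    return p, T

for h in (0, 1000, 2000, 4000):
    p, T = ambient(h)
    print(f"h = {h:4d} m : p_a = {p/1e5:5.3f} x 1e5, q_a = {T:5.1f} K")
```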
Bifurcation analysis of a data-driven model The bifurcation analysis so far has been used to demonstrate the range of dynamics found within a purely physics-based model; however, engine models often contain elements that are data-driven, that is, the dynamic model is built using data from a test cell. Here, a bifurcation analysis of this type of model is presented, in order to demonstrate the applicability of a bifurcation approach to a more industrial-oriented engine model. The data-driven model uses data that are discontinuous in nature, so to ensure smoothness, which is a pre-requisite for a bifurcation analysis, a cubic spline technique is used. As before, a one-parameter continuation for different key torque values will show the qualitatively different behaviour in the system. It is important to distinguish between which areas of the state space are interpolated and which are extrapolated; therefore, this boundary is highlighted. Figure 21 shows a one-parameter continuation for a torque input of 250Nm. Two separate branches are shown. The first branch is bound by two fold bifurcations: bifurcation A and another outside the physically meaningful throttle range. Horizontal lines indicate the extremities of the engine speed values at which data have been obtained. The first branch of equilibria, which is connected to bifurcation A, falls both inside and outside the region in which data exist. Those equilibria inside the region correspond to a steady-state output that could be achieved by the engine; those outside the region are produced from data extrapolation, so there are no data to support their existence. A second branch, connected to bifurcation B, is only present due to extrapolation. Although this appears outside the data range, this artefact is important as it shows the minimum engine speed required by the model to reach a realistic steady-state response. Engine speed initial conditions above branch B will be attracted to the steady-state output on branch A, while those below decay to zero. Figure 22 shows a one-parameter continuation for a torque input of 280Nm, where a qualitatively different response can be seen. Bifurcation A remains in a similar location; however, the associated branch is now entirely outside the data range and realistically the engine would never be able to run at any equilibria on this lower branch. The single branch that exists inside the data region is qualitatively the same as the one for the physics-based model in Figure 5(b) and therefore the same conclusions can be made. That is, the dynamically repelling equilibria highlight the minimum requirement on the initial condition for engine speed, and the fold bifurcation indicates the minimum throttle angle. As before, these fold bifurcations can be traced through the entire state-parameter space. Figure 23 shows the parameter space (a) and state-parameter space (b). Three distinct branches are shown, A, B and C. Branches A and C are never within the data range; however, they bound dynamically attracting equilibria in the data range. Branch A can be seen in Figures 21 and 22 and bounds the lower branch. Branch C is out of bounds in Figure 21; however, this would represent the upper bound. Branch B traces the bifurcation present in Figures 21 and 22. In Figure 21, this point was out of bounds and was considered a dynamic artefact whereas in Figure 22, it was representative of a minimum throttle angle. The exact point in which the qualitative change happens is indicated by the cusp bifurcation. 
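To sketch how a table-based model of this kind is made amenable to equilibrium analysis, the example below builds a made-up torque table, smooths it with a bivariate cubic spline, and recovers the steady-state engine speeds for one throttle/load pair as roots of the net torque. The table values, operating point and rise-and-fall shape are assumptions for illustration, not the MathWorks data discussed here.

```python
# Sketch of the data-driven idea: a synthetic torque table is smoothed with
# a cubic spline and steady-state engine speeds are found as roots of the
# net-torque function. All numbers are illustrative only.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import brentq

throttle_pct = np.linspace(0, 100, 11)                 # table breakpoints
speed = np.linspace(50, 600, 12)                       # engine speed [rad/s]
TH, W = np.meshgrid(throttle_pct, speed, indexing="ij")
torque_table = 3.5 * TH * (W / 250.0) * np.exp(1.0 - W / 250.0)  # made-up surface

engine_torque = RectBivariateSpline(throttle_pct, speed, torque_table,
                                    kx=3, ky=3)        # smooth interpolant

def net_torque(w, thr, load):
    return engine_torque(thr, w)[0, 0] - load

thr, load = 72.0, 200.0                                # one illustrative operating point
ws = np.linspace(speed[0], speed[-1], 400)
for a, b in zip(ws[:-1], ws[1:]):                      # bracket sign changes
    if net_torque(a, thr, load) * net_torque(b, thr, load) < 0:
        w_eq = brentq(net_torque, a, b, args=(thr, load))
        print(f"equilibrium engine speed: {w_eq:6.1f} rad/s")
```

Spurious wiggles introduced by the interpolant can create exactly the kind of artificial fold clusters discussed next, which is why the smoothing choice matters.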
Many additional fold bifurcations exist on branch B. Figure 24 provides a one-parameter continuation in a dense area of fold bifurcation, which are described as 'clusters'. There are no physical properties that would cause this behaviour, and as such, they are considered dynamic artefacts caused by interpolation. This clustering highlights that data gathered in this region are insufficient to determine the correct behaviour. Before any conclusions are made about this region, more operating points should be gathered in this region for a more complete and accurate analysis of this system. Because there are only two parameters available to modify in this simple model, this completes the analysis; however, with more complicated models, a bifurcation sensitivity analysis could be conducted as demonstrated earlier. Conclusion This work offers a complementary approach to analyse the dynamics of a nonlinear engine model by conducting a bifurcation analysis. Two models were used: a physics model featuring three nonlinear first-order coupled differential equations and a smoothed datadriven model in Simulink. Both systems are mapped, and properties of the systems could be identified as bifurcation points in an efficient way using numerical continuation. Branches of dynamically repelling and dynamically attracting equilibria could be traced across the entire parameter range from a known steady-state response, allowing the user to predict the outcome of any time history simulation. The equilibria traced in the system provides physical boundaries on the operation of the engine, with bifurcation points themselves indicating key system properties, such as the fold bifurcation being indicative of the minimum throttle angle and peak load torques, and Hopf bifurcations highlighting regions where oscillations occur in the system. A bifurcation sensitivity analysis provided further information about which changes in additional parameters may cause the system's dynamics to qualitatively change. The regions in which the bifurcations exist in the state-parameter space change as alterations are made to the mechanical and/or air flow parameters. There are many potential applications for the theory in automotive systems as the methods could be applied to any nonlinear powertrain model. Future areas to be considered include an emissions model to determine how emissions output is influenced by control parameters. In addition, knowledge of bifurcations and where qualitative changes in the engine's dynamics response could aid engineers in the development and design of control strategies. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research is funded through the EPSRC Centre for Doctoral Training in Embedded Intelligence under grant reference EP/L014998/1, with industrial support from Jaguar Land Rover.
9,892.4
2020-06-12T00:00:00.000
[ "Engineering", "Mathematics" ]
Bulletin of Electrical Engineering and Informatics Received Aug 19, 2022 Revised Nov 2, 2022 Accepted Nov 18, 2022
This article aims to compare two different scanning devices (a 360 camera and a digital single lens reflex (DSLR) camera) and their properties in the three-dimensional (3D) reconstruction of an object by the photogrammetry method. The article first describes the various stages of the process of 3D modeling and reconstruction of the object. A point cloud is generated and then converted, in the following steps, into a 3D model of the object, including textures. The scanning devices are compared under the same conditions and time, from capturing the image of a real object to its 3D reconstruction. The attributes of the scanned image of the reconstructed 3D model, which is a mandarin tree in a citrus greenhouse in a daylight environment, are also compared. Both created models are also compared visually. That visual comparison reveals the possibilities of applying both scanning devices in the process of 3D reconstruction of an object by photogrammetry. The results of this research can be applied in the field of 3D modeling of a real object, using 3D models in virtual reality, 3D printing, 3D visualization, image analysis, and 3D online presentation.
INTRODUCTION
Today, there are many high-quality image capture devices. The goal is often to accurately reproduce an image in high digital quality for other possible outputs. These outputs can be not only in the form of images in digital and online environments such as web presentations [1] but also in the form of three-dimensional (3D) models [2]. These models can serve as a basis for 3D printing [3], [4] or be used in augmented reality and virtual reality (AR/VR) [5]. The goal for the captured image is not only its high quality but also a short processing time and easy sharing or streaming [6]. There are several methods for creating 3D models. This article focuses on using image data for modeling 3D point clouds of plant structure by photogrammetry. The structure from motion (SfM) photogrammetry method for obtaining 3D structure from 2D sequences has been used for quite some time [7]. Modern digital image data processing has followed analogous methods of photogrammetry. Today, this method can be divided according to several criteria. Photogrammetry is primarily divided into aerial and terrestrial and, according to the number of image configurations, into single-frame and multi-frame [8]. At present, the focus of research is on aerial photogrammetry, where image data are captured from above using controlled aerial or satellite photographs and, nowadays, also with advanced drones and light detection and ranging (LiDAR) technologies [9]-[11]. However, photogrammetry is considered a low-cost method and finds practical application in many commercial and scientific fields. The method is used, for example, in architecture, construction, materials engineering, microtechnology, medicine, waste management, culture, conservation, archeology, anthropology, design, forestry, agriculture, forensic sciences, and many others, as current studies have shown [8], [12]-[19]. There are many imaging devices for capturing image data on today's digital technology market. The choice of equipment depends on the required quality and output. Photogrammetry works on the principle of using 2D image data, most often in the form of photographs.
Therefore, classic compact cameras, mobile cameras, 3D or 360 cameras, laser 3D scanners, drones, or tablets with LiDAR technologies, which are now available to the general public, are used for this method [6], [9], [10], [20]. The quality of the scanned image depends on the resolution, color gamut, and lighting conditions, and these attributes affect the demands of additional outputs [12], [20], [21]. Post-processing relies on image-processing and graphics software and on methods designed to improve the quality and analysis of image data [22], [23]. This article aims not only at the 3D modeling of an object into a point-cloud and texture model using the SfM method. Another goal is to compare two scanning devices and their suitability for image data processing by photogrammetry. This experiment uses a 360° camera with six lenses and a classic mid-range DSLR camera to capture the image. The acquisition of image data with both devices, the methods of subsequent image processing, and the achieved results are presented in the following chapters of this article. The resulting principles can form the basis for working with natural objects and their structure, and for the creation of realistic plant 3D models with uses not only in agriculture. Possibilities of working with point cloud models and polygonal meshes can be applied across disciplines, especially in Industry 4.0 and virtual reality. The sensing devices selected for this experiment differ in their image capture options and in the characteristics of the image obtained. This experiment explores the possibility of using these sensing devices in the 3D modeling of point clouds of a particular object and their subsequent analysis. This experiment assumes that two different 3D models will be created, depending on the type of sensing device chosen. For this purpose, a 360 camera with six "fisheye" lenses and a digital single lens reflex (DSLR) camera with an 18-55 mm lens were selected as experimental devices. A mandarin tree was chosen as the object for the 3D reconstruction. The manufacturer envisages the design and use of the 360 camera primarily for direct application and transformation of the captured image into a 3D image of the space. In this work, however, the camera is used to reconstruct a 3D model of an object located in the space in which the image was scanned. The purpose is not to create a 3D or VR environment; in this experiment, the emphasis is primarily on the possibility of using the device for the digital reconstruction of a particular object that is part of this space. For this, the method of photogrammetry was chosen. The prerequisite is to obtain the object model through reconstruction from the source images. Potentially, it is also possible to segment the model from the environment for further processing and use of the 3D model, and to analyze the image quality and the information that the object model provides. In this work, the 360 camera is confronted with a device by which the selected object can be reconstructed into a 3D model using the photogrammetry method relatively quickly: a classic digital single lens reflex (DSLR) camera with a regular lens. Such cameras have long been commonly used for 3D reconstruction by photogrammetry. They are also characterized by a relatively low price compared to the other types of sensing devices mentioned above and to the 360 camera used in this experiment.
Last but not least, this work also considers the possibility of combining both devices to obtain a natural 3D environment with models of real objects placed into this environment, with the aim of the most authentic reproduction of an actual scene in the digital environment.
METHOD OF MULTI-IMAGE 3D PHOTOGRAMMETRY
The direct linear transformation (DLT) method was proposed as early as 1971 as a basic mathematical photogrammetry model [24]. It is a contactless method of recording an object in space. The basic principle of the photogrammetry method is a definable point located in space, which is shown in at least two images. Photogrammetry generates 3D point positions by projecting lines from the camera position via the charge-coupled device (CCD) into space. When using two photographs, the intersection of the lines indicates the position of the point. Therefore, the camera angle plays an essential role in the accuracy of this method. An incorrect camera position or orientation will generate an incorrect 3D point position. See Figures 1 and 2 in reference [7] for a graphical description of the algorithm: Figure 1 there shows the basic theory of photogrammetry and Figure 2 depicts the transformation between two reference frames [7]. In its standard form, the DLT equations read
I′ + (l1·X + l2·Y + l3·Z + l4)/(l9·X + l10·Y + l11·Z + 1) = 0 and J′ + (l5·X + l6·Y + l7·Z + l8)/(l9·X + l10·Y + l11·Z + 1) = 0,
where I′ = I − I0, J′ = J − J0, and l1 to l11 are the direct linear transformation parameters (DLTP). The coefficients l1 to l11 are functions of the exterior and interior orientation landmarks. The initial values of the external and internal orientation elements are not needed in the calculation. The DLT equation can be used in the photogrammetry of consumer-class digital cameras [7]. More than just two images are usually needed to reconstruct a specific object into a 3D model. Tens to hundreds of images can be used for 3D reconstruction [25]. The triangulation algorithm finds a common point in these images. Subsequently, another algorithm calculates the camera positions in space during object acquisition. Then the individual values x, y, z are assigned to each point according to the Cartesian system [8]. That determines the basic information about the object's position, size, color, and geometric shape in space [25].
Representation of 3D data in a point cloud
3D data in their basic form are represented as a point cloud and can provide secondary information about color or intensity [25]. The information from the 2D image is contained in mathematically structured fields. These data already explicitly contain relationships between neighboring points. However, in a general 3D area, point clouds contain neighboring relationships only implicitly. Imported internal and external orientation data of the scanning device and the accuracy of its settings can be used as additional information for the 3D reconstruction of the object and can be loaded together with the images. The point cloud is shown in Figure 1.
METHOD OF IMAGE CAPTURE AND POINT CLOUD GENERATION
The object for capturing image data is a young tangerine tree. A live plant with many green leaves, which define the object, will be converted into a 3D model. The tangerine tree grows in a citrus greenhouse environment alongside other citrus types. The young tree is 1 m tall, with branches spreading about 0.5 m from the trunk. The temperature in the greenhouse environment was 24 °C at the time of image capture. The subject is captured in bright daylight without additional lighting devices.
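To make the line-intersection principle concrete, the following sketch (not part of the original article) performs a linear, DLT-style triangulation of a single 3D point from two camera views with NumPy; the projection matrices and pixel coordinates are made-up illustrative values.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT-style) triangulation of one 3D point from two views.

    P1, P2   : 3x4 camera projection matrices.
    uv1, uv2 : (u, v) pixel coordinates of the same point in each image.
    Returns the 3D point as a length-3 array.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each image measurement contributes two linear equations A @ X_h = 0.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve by SVD; the homogeneous solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]
    return X_h[:3] / X_h[3]

# Illustrative cameras: identical intrinsics, second camera shifted along x.
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 3.0, 1.0])
uv = lambda P: (P @ X_true)[:2] / (P @ X_true)[2]
print(triangulate(P1, P2, uv(P1), uv(P2)))  # ~ [0.2, -0.1, 3.0]
```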
This experiment aims to create a faithful 3D model of the plant's tree structure in point cloud and texture form for other possible desired outputs. Furthermore, this work aims to compare the scanning devices and their 3D object modeling by photogrammetry.
Scanning devices for obtaining image data
Two scanning devices were used for the input image data in the experiment: a 360 camera and a DSLR camera with a lens. These devices differ in construction, lens characteristics, use, and type of image captured, as shown in Figure 2(a) (DSLR camera Canon EOS Kiss X7, left, and 360° Insta360 Pro camera, right), Figure 2(b) (image taken by the Insta360 Pro with a 200° F2 "fish-eye" lens), and Figure 2(c) (image taken by the Canon EOS Kiss X7 camera with an EF-S 18-55 DC III lens). The scanning devices and their properties are described in more detail in the following text. The Insta360 Pro is a panoramic 360° camera with six 200° F2 lenses. These "fish-eye"-type lenses are located around the circumference of the spherical body of the camera. The camera allows taking quality 3D images and has several modes for capturing images at a resolution of 4,000×3,000 px and for 8K image creation. The camera takes six still images in normal mode, which can then be used for stitching and subsequent processing of 8K monoscopic and stereoscopic spherical images. The camera uses its own Insta360 Stitcher application to stitch images. The camera also works in a primary mode of capturing the image in RAW format for subsequent image processing, calibration, or creating its color scale in digital negative (DNG) format. Brightness differences can be eliminated using high dynamic range (HDR) mode. This mode takes three series of six shots. These images can be used to stitch a highly dynamic spherical image. Burst mode takes ten series of six shots. The camera allows remote control and image control directly in its application and in VR. The Canon Kiss X7 is a DSLR camera that works with a maximum resolution of 5,184×3,456 pixels. The camera takes an 18 MP photo. The device also works well in low light: it works with a maximum sensitivity of ISO 12,800, which can be expanded to 25,600. It includes a complementary metal oxide semiconductor (CMOS) sensor, an optical viewfinder, and several modes selected according to the type of output and the conditions of the environment where the image is captured. As mentioned above, the photogrammetry method can also be applied to this type of device.
Type and quality of the captured image
Both types of devices were used in this experiment in the same conditions, space, and at the same time. Since these were two different scanning devices with different properties, it is evident that the output image will be different, especially during post-processing for 3D display in graphics software and subsequent modeling. This chapter presents and compares the fundamental differences in the attributes of the acquired image and subsequent image processing in graphics software. The Insta360 Pro 360° camera works with six fixed fisheye lenses around the circumference of the spherical body of the camera. The primary RAW mode was used to capture the image in the greenhouse, and six images were created for subsequent 3D modeling. Figure 2(a) shows both sensing devices used. The reference image is shown in Figure 2(b). A reference image from a series of 53 photographs needed to model the 3D point clouds taken by the DSLR camera is shown in Figure 2(c).
The attributes of both of these figures are described in Table 1. Table 1 shows some differences in the properties of the obtained image, mainly the difference in the size of the image. The reference image from the single lens reflex (SLR) is 25.4 MB, and the reference image from the 360 camera is 18.3 MB. It is therefore necessary to mention that 3D modeling by photogrammetry, especially using mirror (DSLR) cameras, will place higher demands on the capacity of the computing device. As shown in Figure 2, the resulting image obtained from the scanning devices is very different. That is due not only to the different types of lenses used. In the case of an image obtained from the 360° camera using the basic RAW format, we can obtain a stitched image from individual images taken by the camera simultaneously. From an SLR camera using a classic set lens, as this experiment shows, we get a classic photo for further editing and output. In the case of capturing the image with a DSLR camera for 3D modeling using the photogrammetry method, we must take a sequence of individual photographs to be connected into point clouds. A total of 53 photos were taken in this experiment. In contrast, six individual images from the basic mode of the 360 camera are used in this experiment for 3D modeling by photogrammetry. All images from both devices were compressed into JPG format before subsequent image processing.
Image processing for creating the 3D model
As mentioned in previous chapters, several types of scanning devices are designed for different image processing. Also, 3D modeling of objects or environments requires different solutions depending on the image processing and the type of output required. This experiment is focused on the reproduction of a real object in a 3D environment. It compares the possibilities of processing and using acquired image data from two different scanning devices. This section and Figure 3 summarize the processing pipeline from the device and the transformation of the image into digital 3D environments, which can then be used for visualization in virtual reality. Figure 3 shows the whole image processing process, from the initial capture of a real object to the output display in virtual reality, where the final image analysis can be performed. As mentioned, the 360 camera allows you to process the captured image into a 3D image almost instantly. VR display functions can also be used in real time. By automatically stitching individual images directly in the camera, or by using the output panoramic image, image quality analysis can be performed quickly. Adjusting and increasing the quality of the captured image can be done in classic 2D graphics software for photo editing and processing. The primary image adjustments are usually the brightness and contrast of the image; these can often lead to a substantial improvement in image quality. In this experiment, this image analysis process was used to check the quality of the images. Because the image was captured to model point clouds, the six basic images were processed by the same process as the 53 images taken by the DSLR camera. The captured images from the mentioned scanning devices were processed separately. The 2D graphics software Adobe Photoshop (Adobe Systems Inc., San Jose) was used for the primary image quality control. This graphics software is designed primarily for working with bitmap graphics. Today, it is one of many graphics programs that complement and support each other throughout Adobe's Creative Suite graphics package.
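The brightness/contrast adjustment and JPG conversion described above can also be scripted. The sketch below is only an illustration of that preprocessing step, with hypothetical folder names and placeholder gain/offset values rather than the settings actually used in the experiment.

```python
import glob
import os
import cv2

# Hypothetical input folder with RAW-derived PNG exports; alpha (contrast gain)
# and beta (brightness offset) are placeholder values for illustration only.
alpha, beta = 1.15, 10
os.makedirs("greenhouse_jpg", exist_ok=True)

for path in glob.glob("greenhouse_raw/*.png"):
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    adjusted = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)  # linear tone adjustment
    out_path = os.path.join("greenhouse_jpg",
                            os.path.basename(path).replace(".png", ".jpg"))
    # JPEG quality 95 keeps file sizes manageable for the photogrammetry software.
    cv2.imwrite(out_path, adjusted, [cv2.IMWRITE_JPEG_QUALITY, 95])
```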
In this program, the image quality can be improved well with image attribute editing tools such as brightness, contrast, and more. The images were then uploaded to the 3D modeling software Agisoft Metashape Professional (Agisoft LLC, St. Petersburg), in which secondary quality control of the recorded images took place. This graphics software for working with 3D graphics and image data is a suitable solution for the photogrammetric processing of digital images. The resulting 3D data can be used in many areas of human activity, ranging from the documentation of cultural heritage to the creation of visual effects or work in geographic information system (GIS) applications. The software works as an automated data processing system, allowing parameters to be set for specific tasks and different image data types. A dense-cloud-based mesh generation algorithm generates mesh models even for dense point clouds; these clouds can contain hundreds of millions of points. The out-of-core solution implemented in the Agisoft software allows for the generation of polygonal models. The recorded images then formed the basis for obtaining the basic cloud of points. That defined the basis of the 3D model of the scanned object, its position, color, and other attributes. A dense cloud was then generated from the subsequently created depth maps, which already precisely defined the 3D object in the point cloud model in the attributes of shape, size, and color. Often, correcting point clouds, defining the area and extent of a 3D object, and reconstructing a polygon mesh make it necessary to modify the object for the final image display. However, point clouds can also be projected into a virtual environment, which is intended only for visual inspection of the image and its analysis. Depending on the required output, another desired model is then generated from the clouds. In this work, the transformation of the 3D model into a wireframe, and then into a texture model, was used. The following chapter compares the attributes and possibilities of the entire image processing into a 3D model and its visualization in individual steps.
Creating a primary point cloud from photos
All images in the series must be aligned to determine the primary position of the point cloud, and the quality of individual images in the series must be controlled. Figure 4 visually describes the alignment process and subsequent modeling of the object into point clouds. Figure 4(a) shows a reference frame from a series of 6 frames of photographs taken by the 360° camera. Figure 4(b) shows the position of the 6 photos obtained by the 360° camera, from which the primary 3D point cloud model is then defined. Figure 4(c) shows a reference image from a series of modeling images acquired from a DSLR camera. This series contains a total of 53 images. Figure 4(d) describes the location of the 53 images and the point cloud obtained from this series of images. As shown in Figure 4 and from the text above, the resulting point clouds from the 3D software are different. The images from the 360 camera are arranged around the perimeter, and the obtained cloud of points follows this perimeter. This yields a 3D model close to what automatic or manual stitching of images in 360 camera applications would produce. However, a point cloud suitable for further analysis of the 3D structure of the object cannot be obtained in this way; the point cloud contains only basic information about the interior environment.
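The align-cameras, dense-cloud, mesh, and texture steps performed in Agisoft Metashape can also be driven through its Python API. The outline below is a hedged sketch: method names and default arguments vary between Metashape versions, and the file names are placeholders, so it should be read as the general shape of the pipeline rather than a verified script.

```python
# Sketch of the align -> dense cloud -> mesh -> texture pipeline via the
# Agisoft Metashape Python API; an assumption of the API surface, not the
# authors' script, and subject to version differences.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("dslr_jpg/*.jpg"))  # 53 DSLR frames (or 6 frames from the 360 camera)

chunk.matchPhotos()        # feature detection and matching
chunk.alignCameras()       # camera poses + sparse point cloud
chunk.buildDepthMaps()     # depth maps for the dense reconstruction
chunk.buildDenseCloud()    # dense point cloud (buildPointCloud in newer releases)
chunk.buildModel()         # polygonal mesh from the dense data
chunk.buildUV()
chunk.buildTexture()       # textured 3D model

doc.save("mandarin_tree.psx")
```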
In contrast, the 53 images taken by the DSLR camera created a cloud of points, especially in the space inside the position of all images. This yielded a 3D image of the desired object.
CREATING A 3D TEXTURE MODEL FROM A POINT CLOUD
After the basic processing of the images into a point cloud, further procedures for reconstructing the 3D image can continue. The following figures visually show these procedures for processing image data from both scanning devices. Figure 5 shows a vertical view of the developed 3D model and Figure 6 shows a horizontal front view of the same 3D model. These figures compare the same image processing process and the differences in the generated 3D model when using two different scanning devices and different numbers of input photos. Six images were used for the model created from the 360° camera; fifty-three photos were used to create the 3D model from the DSLR camera. Figure 5 shows a vertical view of the image data processing process into a 3D model. Figure 5(a) shows the basic point cloud obtained from the 6 photos taken by the 360 camera, and Figure 5(b) shows the next step in image processing, the dense cloud generated from the basic point cloud. Figure 5(c) shows the polygon mesh created from the dense cloud, and Figure 5(d) visualizes the final form of the 3D object texture. A vertical view of the process of 3D object visualization using the DSLR camera is shown in Figure 5(e), which presents the basic point cloud created from the series of 53 photographs. Figure 5(f) shows the generated 3D dense cloud model, Figure 5(g) the polygon mesh, and Figure 5(h) the texture of the generated 3D object using the DSLR camera. Figure 6(a)-(h) shows the same 3D model from a horizontal perspective in all steps of the image processing. The above visualizations of 3D model creation demonstrate the process of further image processing from point clouds. The next step is to create a dense cloud from the cloud points, which specifies the object's properties. Then, a polygonal mesh is created that connects all the points in the dense cloud. In this experiment, the final step is to create a 3D texture model from the polygon mesh. Figures 5 and 6, comparing the structure of the created models, show models that differ depending on the devices used and the properties of the scanned image. Table 2 compares the individual attributes of the 3D models from the modeling process for both scanning devices and shows the differences in image processing and its properties. In particular, the different number of photographs entering the modeling process is crucial. This difference is mainly reflected in the number of points in the point cloud/dense cloud, which in turn leads to different numbers of faces and vertices in the 3D model. It is evident that these facts also affect the time and performance of computing units in modeling and the size of individual files. From the above, it can be assumed that it is more appropriate to use a DSLR camera to reproduce a real object in 3D, although this method of modeling by photogrammetry is more time-consuming and capacity-intensive. The 3D model from the 360 camera in this experiment copies the character of the 3D perimeter model and is not fully modeled toward the middle, as shown in Figure 5. The detail in Figure 7 represents one of the tree branches in the reconstructed 3D model. As can be seen from the image, the 3D model of point clouds is also very different. The model obtained from the 360 camera copies the perimeter of the entire 3D model.
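A Table 2-style comparison of the exported point clouds can be reproduced programmatically. The snippet below is a small illustration only, assuming the two models were exported, for example, as PLY files with hypothetical names, and uses the Open3D library.

```python
import numpy as np
import open3d as o3d

# Hypothetical export file names; any point-cloud format supported by Open3D works.
clouds = {
    "360 camera (6 images)":   "tree_360.ply",
    "DSLR camera (53 images)": "tree_dslr.ply",
}

for label, path in clouds.items():
    pcd = o3d.io.read_point_cloud(path)
    pts = np.asarray(pcd.points)
    extent = pts.max(axis=0) - pts.min(axis=0)  # bounding-box size in model units
    print(f"{label}: {len(pts)} points, bounding box {extent.round(2)}")
```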
The position of the points in the point cloud of the tree model is also generated in this way, as Figure 7(b) shows. This figure shows that the polygonal mesh generated from the point cloud has a 2D character. Part of the polygonal mesh, in this case, is also the background. Thus, in this way, the 3D model of the tangerine tree is not generated separately but is part of the entire environment. Compared to the above models, the 3D character of the point cloud and polygonal mesh models of the tree obtained from the source image data of the DSLR camera is noticeable, as shown in Figures 7(d) and (e). A 3D model of the point cloud detail of a branch indicates the position of individual points in space, and a polygonal mesh of 3D character is created from these points. Another difference in the attributes of both models is the color information represented in the point clouds of the two compared models. These attributes can best be compared visually in Figures 7(c) and (f). These images show the details of the leaves in the texture model. The model obtained from the 360 camera is visually very close to a photographic image and fully blends with the background. A detailed view of the leaf texture from the DSLR camera model shows the 3D attributes of the model generated from the polygonal mesh, as shown in Figures 7(e) and (f).
APPLICATION OF THE CAPTURED IMAGE TO THE VR ENVIRONMENT
As stated in the introduction and the previous text, the 360 camera is expected to be used primarily for direct transformation of the image into a 3D and virtual environment. Figure 8 outlines this procedure. Its use can also be assumed in combining several methods of modeling and transforming a natural environment or objects into one whole. The following describes this method of using the 360 camera directly to visualize images in VR. At the same time, a 3D model of a tangerine tree created by photogrammetry is added to the created virtual 3D environment of the greenhouse. Figure 8 shows the data flow during image processing from both sensing devices. [Figure 8. The process by which the 3D model is integrated into a VR environment created by the 360 camera.] Figure 8 shows that the first step is to check the image quality in the 3D imaging software. A panoramic image created by the 360 camera was used in this part of the experiment. This image is created automatically by the camera for almost all of its modes, which are described in chapter 3.1. Figure 9(a) shows a panoramic image created by the 360 camera and Figure 9(b) shows its visualization in the FSPViewer 3D imaging software. This 3D imaging software is available free of charge. The software was also used to check the 3D image. The panoramic photograph of the greenhouse environment was subsequently transferred into virtual reality creation software. In this experiment, the freely available Unity graphics software was used. The software is designed for a wide range of uses. It can create 2D and 3D graphics for applications, especially for VR technology. A virtual model of the greenhouse environment was created from the source panoramic photo. In Unity, it is treated as a material created for a specific object. This object is a sphere that corresponds to the character of the photograph. This procedure produced the same environment as in the FSPViewer software. Attention in the virtual environment was again focused on the object of the tangerine bush. [Figure 9. Image processing by a 360° camera: (a) panoramic photograph of the greenhouse environment created by the 360 camera and (b) visualization of the 3D environment in the FSPViewer software.]
The next step in the experiment was to transform the created 3D model of the tree from the DSLR camera. The 3D object of the tree created by photogrammetry was exported from the Agisoft software as two models. The first 3D model was exported as a 3D polygonal mesh model created from the points of the dense cloud; this model does not contain a texture. The second model was exported as a 3D texture model. Figures 10(a) and (b) show both exported 3D models for insertion into the virtual greenhouse environment. Both 3D models shown in Figure 10 were exported in the .obj format. The 3D model without texture was imported into the 3D greenhouse environment created in the Unity software. The 3D texture model was then used to create the material to visualize the texture of the 3D model in Unity. The material created in this way was then applied to the 3D model. That created a model of the tangerine tree, including the surface and color of the structure. Thus, this model became a new object in the created virtual environment of the citrus greenhouse, which is visually presented in Figure 11. The created virtual reality greenhouse environment is shown in Figure 11(a); a panoramic photo captured by the 360° camera was used for the environment. Figure 11(b) shows the basic 3D model created with the DSLR camera that will be added to the virtual environment. This model is in its elemental form without texture; the texture is added to the model in the Unity software as a separately created material. The 3D model added in this way to the created VR environment is shown in Figure 12. The quality of the created 3D model, especially its color reproduction, can thus be visually compared. In Figures 12(a) and (b), the natural form of the created 3D object in the VR environment is visualized. This visualization in the VR environment allows visual quality control of the created model and comparison with the object that is part of the virtual environment. These figures visually represent a combination of two different approaches to using these sensing devices. The 360 camera is a sensing device with which the image can be easily converted into a VR environment. This environment can be used almost immediately to immerse the user in the created VR environment. This experiment used panoramic photography taken with the 360 camera. A 3D model of an object created from a series of photographs taken by a DSLR camera using the photogrammetry method was inserted into this environment as a separate object. It was necessary to use two different 3D models for the final visual output. Combining a 3D model consisting of a polygonal mesh of point clouds and a 3D texture model resulted in a visual tree output approaching a real object. The model, however, does not reach the reproduction quality of the tree in the panoramic photo. These and other attributes are discussed below.
DISCUSSION OF RESULTS
The above experimental visualizations show that both devices can be used for image reconstruction using photogrammetry, but with different intentions of use. The mentioned 360 camera can quickly and easily capture an image for 3D display and quick application in a virtual environment. Therefore, it is unnecessary to convert the scanned image to a point cloud unless another intention is considered.
Such an intention can be a deeper analysis of the image or point cloud, or their transformation into a polygonal mesh. However, this experiment showed that the object could be reconstructed from the scanned image: the selected object in the modeling process copied the perimeter of the environment. The device can thus be used to model images into point clouds and into wireframe or texture models. The 360 camera has many modes and options for working with the captured image. In the experiment, the RAW mode was evaluated as suitable for the application of modeling by photogrammetry. That raises the question of whether the reconstruction result would be better if a combination of modes were used; this may be the subject of further research. The advantages of this device are the low number of applied images, higher modeling speed, and lower demands on the capacity of the computing device. However, it is necessary to consider the lower number of generated point cloud points, which depends directly on the number of images used and the image information obtained from them. The DSLR camera achieves significantly better results in the reconstruction of the object than the 360 camera. The device is suitable for terrestrial multi-image photogrammetry methods and 3D reconstruction of objects, especially when no other equipment, such as a high-quality and powerful handheld 3D laser scanner, is available for scanning the object. Unlike other devices, DSLR cameras can be used to model a real object at a low cost. A total of fifty-three images were used in this experiment, and they were sufficient to reconstruct the 3D model of the real object. The number of images used plays an essential role in this. Compared to the six images from the 360 camera, a much higher number of points in the point cloud was generated from the fifty-three images in this case. That is essential for the next steps in 3D object modeling and its quality. The spatial arrangement and other information about the object in the environment are better specified in the case of the DSLR camera. That also defines the final form of the model by photogrammetry in this case. Another way to use the two sensing devices is a combination of their properties and possibilities of use. In the case of the DSLR camera, its usefulness for creating a 3D model of a real object using the photogrammetry method was confirmed. From the images taken by the 360 camera, the 3D perimeter of the reconstructed environment was created by the above method, but a separate 3D model of the tangerine bush was not created: the tree copied the shape of the overall 3D model and merged with the greenhouse environment as a background. Likewise, the polygonal mesh of these elements was created as a whole. The details are displayed in Figures 7(a)-(f), which also show details of the model created by the DSLR camera; the 3D character of the created tree model is noticeable there. Differences can also be observed in the detailed display of the leaves of both models. In the case of the 360 camera, it can theoretically be assumed that the desired result, a 3D reconstruction of the real object (the tangerine bush), could be achieved by using a different camera position and other image capture modes. The 360 camera's ability to transform the captured image into a VR environment was used in another scenario. The option to take a panoramic image using this camera was used; this type of image can be converted to 3D images and VR quickly.
This created a digital 3D image of the natural environment, which allows for the widespread use of such sensing devices in many areas. The user gets the opportunity to dive into the VR environment very quickly, especially if the created environment is shared over a greater distance. We can also import a 3D model created from a DSLR camera into this environment. In this experiment, it was necessary to export two types of the created model from the 3D modeling software: one without texture and the other a 3D texture model. These models were then combined for use in the Unity software, creating a 3D model of a tangerine bush in a virtual citrus greenhouse environment. The object thus became part of the created virtual environment. The advantage is the ability to manipulate and modify some attributes of the model in the environment, such as its position, size, or some display options. For example, in the VR environment, the created structure, properties, and shortcomings of the polygonal mesh can be visually analyzed, and more. That allows the 3D model to become a complementary visual tool for identifying errors in the model. The 3D model of the tree is imperfect compared to the visualized tree from the panoramic photography. It is thus possible to analyze the shortcomings in the properties of the created model. The most noticeable drawback of the 3D model created by photogrammetry is the diffusion effect of the environment. Visually, the blending of colors is noticeable, especially in the lower part of the model, where many undesirable points are created due to the shadow and the dense concentration of the leaves. These points are not just carriers of color information: by connecting them, the polygonal mesh and the other models, including the textured one, are created. Therefore, these errors also propagate into all the other steps of object modeling. However, by visualizing in a virtually natural environment, all attributes can be compared to a real-world view, as this experiment shows. It can also visually help identify all the bottlenecks and properties of the 3D model compared to the visualized image of the object from the created environment. Such a type of additional control in a VR environment can be applied in many fields, especially concerning the application of materials and their properties.
CONCLUSION
This experiment aimed to discover the possibilities of two scanning devices to create a 3D model of a real object by photogrammetry. Two different image capture devices were used in this study: a 360 camera with six fish-eye lenses around the perimeter of its body and a DSLR camera using a set lens. Because both devices differ in construction and in the properties of the acquired image, it could be assumed that their suitability for 3D modeling would also differ. This experiment focused on reconstructing the object into a digital 3D form. A mandarin tree in a citrus greenhouse was chosen for the 3D reconstruction, with the aim of reproducing the 3D model as accurately as possible. Due to the nature and structure of the object, a more difficult reconstruction is also expected. Comparing the process of working with the image from both devices thus allows more efficient use of both devices for future experiments in 3D reconstruction of an environment or image. The 360 camera can be expected to be used especially for capturing the environment and for easy transformation into a virtual reality environment.
The DSLR camera continues to appear as a low-cost device for applying the terrestrial multi-image photogrammetry method to the 3D reconstruction of a real object. However, when multiple source images are used for the 3D reconstruction of an object, higher demands are also placed on computer technology. Future work can then assume more complex object transformations into a virtual reality environment.
8,955.8
2022-01-01T00:00:00.000
[ "Computer Science" ]
Differentiating multi-MeV, multi-ion spectra with CR-39 solid-state nuclear track detectors The development of high intensity petawatt lasers has created new possibilities for ion acceleration and nuclear fusion using solid targets. In such laser-matter interaction, multiple ion species are accelerated with broad spectra up to hundreds of MeV. To measure ion yields and for species identification, CR-39 solid-state nuclear track detectors are frequently used. However, these detectors are limited in their applicability for multi-ion spectra differentiation as standard image recognition algorithms can lead to a misinterpretation of data, there is no unique relation between track diameter and particle energy, and there are overlapping pit diameter relationships for multiple particle species. In this report, we address these issues by first developing an algorithm to overcome user bias during image processing. Second, we use calibration of the detector response for protons, carbon and helium ions (alpha particles) from 0.1 to above 10 MeV and measurements of statistical energy loss fluctuations in a forward-fitting procedure utilizing multiple, differently filtered CR-39, altogether enabling high-sensitivity, multi-species particle spectroscopy. To validate this capability, we show that inferred CR-39 spectra match Thomson parabola ion spectrometer data from the same experiment. Filtered CR-39 spectrometers were used to detect, within a background of ~ 2 × 10¹¹ sr⁻¹ J⁻¹ protons and carbons, (1.3 ± 0.7) × 10⁸ sr⁻¹ J⁻¹ alpha particles from laser-driven proton-boron fusion reactions.
… a round or elliptical pattern, referred to as a "pit", that has brightness variation depending on the pit shape and depth. The number of particles originating from the source can be discerned using the total number of pits detected on the plate. Due to the high density of damage caused by energy deposition of ions as compared to electrons or photons, CR-39 is highly insensitive to electrons, electromagnetic pulses, x-ray, and gamma-ray irradiation.
Here, we show that arrays of multiple filtered CR-39 detector plates form a compact and inexpensive particle spectrometer that can be fielded in large quantities for three-dimensional, spatially resolved ion spectroscopy.The fastest, most-occurring accelerated ions in laser-matter interaction experiments are MeV-scale protons, e.g., from surface contamination layers, due to their lowest mass-to-charge ratio.Heavier ions such as carbon ions are accelerated to MeV energies as well.Particle spectroscopy with filtered CR-39 detector plates requires careful pit analysis, e.g., via numerical image processing of microscope data.Different particles of different energies might produce a pit of the same size, which requires calibration of the response of CR-39.In this context, we have performed a calibration for H, He and C ions with energies from hundreds of keV to several tens of MeV, including calibrations for the width of the distribution describing the ion energy loss statistics in the thin observation layer near the surface.After calibration, we present a forward calculation method using an assumed population of particle species such as protons and carbons or other ions to calculate the expected pit distribution in CR-39.Performing a Monte Carlo (MC) χ 2 minimization of the input spectral parameters, a best match to the data is found.The forward calculation is performed simultaneously for several CR-39 plates in designed arrays, equipped with various filters to sample the spectra at multiple energy ranges of the spectral distribution.We show that the particle spectra for protons and carbon ions detected by CR-39 reproduce the spectra measured by a Thomson parabola ion spectrometer fielded in the same experiment. Unbiased pit recognition method After exposure to ions, the CR-39s are etched to enlarge the latent tracks and imaged with a high-resolution optical microscope (see example image in Fig. 1 and Methods).For each microscope image, the number of these track openings and their track parameters must be extracted.This is not a trivial task as evidenced by numerous publications 19,31,[47][48][49][50] .Due to the large separation between source and detector in our experiments, the tracks are mostly circular where it suffices to measure the track diameter, which is the main parameter used in the remainder of this paper.A common approach to obtain the track diameter from an image is to use a Hough transform 19,48 , for which one must first find the edges of the tracks.There are many different approaches to obtain edges in images, e.g., Sobel or Canny edge detection 51 .However, no matter which edge detection algorithm is employed, it will involve thresholding of some kind 19,28,39,49,50 .There is however an inherent issue with thresholding: it introduces a user bias on which grayscale value should be considered as an edge or not. Consider the orange section of the example image, shown in Fig. 1a,b.It contains two families of circles, one faint and one pronounced.To distinguish the very faint pits from the background, one needs to employ a lax grayscale threshold value g for the Hough transform, resulting in the blue circles in Fig. 
1b.While practically all pits were detected, this overestimates the track diameters, especially the more pronounced pits.More accurate pit diameters are obtained by using a stricter threshold (red circles), but then only three out of eleven pits are detected.Hence, a single threshold value does not in general allow for the determination of accurate track properties.One could introduce several threshold values, e.g., one for each level of "faintness".But even this leads to inconsistent results because pits with a diameter that slightly exceed the threshold will be grouped together as having the same diameter.This is illustrated in Fig. 1c,d.Pits are first found with a lax threshold (g lax , blue), and those that are also found with a strict threshold g strict are overwritten (red).Visually, the variations in g strict (Fig. 1c) do not differ much from one another.However, even slight variations of g strict lead to very different statistical behaviors emerging in the diameter histograms shown in Fig. 1d.For example, the pronounced peak at D ≈ 1.5 μm for g strict = 98 completely vanishes for g strict = 102.Also note the sudden drop-off at D ≈ 1.5 μm for all histograms, which is an artifact arising from the fact that pits with diameters slightly above this value are erroneously reduced by the imposed strict threshold, causing the corresponding bin at D ≈ 1.5 μm to be artificially inflated.This is the essence of the problem with thresholding: It is an inherent user-dependent input, which leads to the possible emergence or disappearance of peaks in the pit size distributions depending on user-selected thresholds. It is of note that the user bias issue is particularly important if the pits are small with diameters nearing the resolution limit of the microscope.The finite optical resolution leads to softening of the edges of the pit image, making it particularly susceptible to thresholding issues.Longer etching times produce more pronounced pit edges compared to the microscope resolution, reducing the threshold issue.However, this may not always be an option as it eventually leads to track overlap or to the emergence of previously unobserved tracks such as from high-energy protons 18,38 .Therefore, short etching times are favorable to combat a possible oversaturation and track overlap in the detector. 
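The threshold sensitivity discussed above can be reproduced with a few lines of standard image-processing code. The sketch below is illustrative only (it is not the analysis code of this work and assumes a hypothetical microscope image file): changing the Canny edge thresholds before a circular Hough transform changes both the number of detected pits and their apparent diameters.

```python
import numpy as np
from skimage import io, feature, transform

image = io.imread("cr39_microscope_tile.png", as_gray=True)  # hypothetical file
radii = np.arange(3, 30)  # candidate pit radii in pixels

# "Lax" vs. "strict" edge thresholds, mimicking the g_lax / g_strict comparison.
for low, high in [(0.02, 0.05), (0.10, 0.20)]:
    edges = feature.canny(image, sigma=2, low_threshold=low, high_threshold=high)
    hough = transform.hough_circle(edges, radii)
    _, cx, cy, found_radii = transform.hough_circle_peaks(hough, radii,
                                                          total_num_peaks=200)
    n = len(found_radii)
    mean_d = 2 * found_radii.mean() if n else float("nan")
    print(f"thresholds ({low}, {high}): {n} pits, mean diameter {mean_d:.1f} px")
```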
The Hough transform may have difficulties in obtaining the pit diameters in a consistent manner, but it is quite adept at finding the pit centers and is used by us for this purpose. For the pit diameter, we have developed a method referred to as the "Half-width-at-half-maximum (HWHM) Method", applying the following reasoning. As one traces a lineout through the center of an idealized, circular pit opening, the encountered grayscale values will gradually decrease from background while crossing the pit boundary until a minimum is reached at the center. Continuing along the lineout back to background, we expect a symmetrically mirrored behavior. Such behavior can be modeled and fitted analytically to achieve a threshold-free criterion for the pit diameter. For sharp pit edges, the gradient is given by the point spread function of the microscope, which typically is described by an Airy function. For a simpler analytic treatment, we take a Gaussian,
g(x) = A + B − A exp(−(x − μ)²/(2σ²)),   (1)
where B is the minimal height of the function at x = μ, A + B is the height of the tail of the function, and μ and σ² are the conventional mean and variance. [Figure 1(b)-(d) caption: The zoomed-in section (18 μm × 18 μm) of the green frame from panel (a). The blue and red circles depict the result of the Hough transform where the lax threshold is kept constant and the strict threshold parameter g_strict is varied slightly. (d) Histogram of the number of pits found as a function of their diameter for the total CR-39 image depicted in (a) for the three thresholds. Visually, the variations in the strict threshold (panel c) do not differ all that much from one another, despite leading to inconsistent histograms depicted in panel (d).] In a realistic scenario, the pixels will not perfectly follow a Gaussian behavior. Some brighter pixels may show up near the center. To mitigate this noise, a radial averaging is taken instead of taking a lineout. Furthermore, the center pixel is not always the darkest one for any given pit. This offset needs to be accounted for by allowing |µ| to deviate from zero, resulting in a bias-free definition for the pit radius by assigning it to the HWHM of the Gaussian: r_HWHM = |µ| + (2 ln 2)^(1/2) σ, where σ > 0. The efficacy of this method is illustrated in Fig. S2 in the Suppl. Materials. Note that the HWHM radius, as it arises from the fitting process, can take on rational values, despite being represented in units of pixels. This enables super-resolution binning in subsequent count histograms. We note that HWHM is not the only possible criterion and other criteria may be applied (see Suppl. Materials), so long as it is consistently used in both calibration and experimental measurements. Finally, we apply the HWHM method to all pits in the original CR-39 image in Fig. 1a, resulting in the histogram shown in Fig. 2, which is strikingly different and shows two distinct peaks instead of the irregular, broad distribution. The two peaks indicate that two different particle species impacted on this CR-39. This result demonstrates that edge detection algorithms are very susceptible to user-defined parameters and lead to incorrect results for CR-39 analysis conditions like those used here. The HWHM method removes this issue.
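A minimal sketch of the HWHM fitting step is given below. It assumes a radially averaged grayscale profile around a Hough-detected pit centre has already been extracted (that step is omitted), fits the inverted-Gaussian form of Eq. (1) with SciPy, and returns r_HWHM = |µ| + (2 ln 2)^(1/2) σ; it is an illustration of the idea, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverted_gaussian(x, A, B, mu, sigma):
    # Grayscale dip of a pit: minimum B at x = mu, background level A + B far away.
    return A + B - A * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def hwhm_radius(radial_profile):
    """Fit Eq. (1) to a radially averaged grayscale profile and return r_HWHM.

    radial_profile : 1-D array of mean grayscale value vs. distance (in pixels)
    from the pit centre found by the Hough transform.
    """
    r = np.arange(len(radial_profile), dtype=float)
    A0 = radial_profile.max() - radial_profile.min()
    p0 = [A0, radial_profile.min(), 0.0, len(r) / 4.0]   # rough starting guesses
    (A, B, mu, sigma), _ = curve_fit(inverted_gaussian, r, radial_profile, p0=p0)
    return abs(mu) + np.sqrt(2.0 * np.log(2.0)) * abs(sigma)

# Synthetic example: a pit with true HWHM radius of about 5 px plus noise.
r = np.arange(15, dtype=float)
profile = inverted_gaussian(r, 80, 40, 0.5, 4.0) + np.random.normal(0, 1.5, r.size)
print(f"r_HWHM ~ {hwhm_radius(profile):.2f} px")
```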
Calibration of pit diameter versus particle species and energy
For the calibration of pit diameter versus particle species and their energy, we combined data obtained at a tandem accelerator, from a Thomson parabola ion spectrometer (TP), and from an alpha emitter source (see Methods for details). The irradiated CR-39s were processed via the HWHM method and analyzed for the most probable pit diameter and the width of the distribution (see next section). The results are plotted in Fig. 3. Most strikingly, the calibration curves are not bijective, i.e., for a given pit diameter there are two possible energy values even for single-species irradiation. Even though there are similar results in the literature 18,20,29,45,52-54, the non-bijective nature for particle identification has not been discussed. Consequently, a particle cannot be identified by pit diameter alone. For example, a pit diameter of 0.8 μm could have been created by a helium ion with about 0.2 MeV or 2 MeV, or by a carbon ion with about 0.1 MeV or about 70 MeV. Furthermore, the curves partly overlap within their error bars for some energies, e.g., carbon ions and alpha particles between 0.1 and 0.4 MeV. When applied to the measurement of few-MeV helium ions from proton-boron fusion, where the expected pit diameters are at around 0.6 μm (~3-5 MeV) in our case, we find an overlap with the peak of the proton curve at 0.1-0.2 MeV. Longer etching may further separate the curves from each other 53,54, but it will not resolve the non-bijectivity. The pit diameters are proportional to the stopping power dE/dx of the incoming particle. As shown in Fig. 3, we obtain a decent match to the data when using a SRIM 55 data table to calculate the energy deposition in the first 2 µm of CR-39. This was obtained by artificially constraining the stopping power dE/dx to an upper limit during the tracking of the ion in the material and including a smooth transition to this limit. The common justification is that a high dE/dx leads to the generation of more secondary δ-electrons that transport a fraction of the energy out of the track volume and thus do not contribute to track formation 18. [Figure 2. Histogram of the number of pits found as a function of their diameter for the total CR-39 image depicted in Fig. 1a using the HWHM method. In contrast to Fig. 1d, the HWHM method results in two distinct peaks at around 0.5 and 1.2 µm, in striking contrast to any of the previous histograms in Fig. 1c, as well as the absence of the artificial cut-off at around D ≈ 1.5 μm.] While the stopping power model provides insights into the shape of the calibration curves, the fit to the data is not satisfactory. A better match was obtained by fitting an analytic function to the data. Here, a modified beta distribution was chosen (Eq. 2), where β = Γ(p2)Γ(p3)/Γ(p2 + p3), Γ is the gamma function, and p1, p2, p3, p4 are the fit parameters. Table 1 summarizes the fit parameters for the three ion species as well as the variance (± 2.35σ) of the residuals between measured data and the fit.
Energy loss statistics in the thin surface observation layer of CR-39
Our etching procedure for the CR-39 results in pits of ~ 1 μm diameter and ~ 1 μm depth. The stopping range of MeV-scale ions is 2-10 times longer than this depth. This means the etching only reveals the energy deposited within the first few microns, rendering the CR-39 equivalent to a very thin detector layer. For thin detectors, the energy loss probability distribution is described by the highly skewed Landau (or Landau-Vavilov) distribution 56, which provides a statistical description of the most probable energy loss µ_L and the width of the distribution σ_L. It is important to note that while the Landau theory provides a good description of the energy loss fluctuations for relativistic particles, it may not be accurate for slower particles where additional effects, such as electron-electron interactions or lattice effects, become more significant. In those cases, other models 57 or MC simulations 55 may be more appropriate. An effective description is captured by convolving the Landau distribution with a Gaussian, referred to as 'Langau' 58, which adds another parameter η_L to the distribution that describes the Gaussian width. Figure 4 shows that a Langau distribution accurately describes the measured pit size distribution. Note that similar distributions for short etch times have been published earlier, e.g., 19-21, without recognizing their importance for particle identification. A proper inclusion of the Landau or Langau distribution is crucial for a correct interpretation of data from a laser particle acceleration experiment. [Figure 3 caption, fragment: ... Pu source are plotted with diamonds. The dashed lines show the scaling of deposited energy within 2 µm of CR-39 using a stopping power model based on SRIM data. Since the stopping power model deviates towards both the high and low energy limits of the data, we used an analytic fit (Eq. 2, continuous lines) for a better match. As a measure of the error of the fit, the shaded bands represent the variance (± 2.35σ) of the residuals between measured data and the fit. The fit parameters and variance values are given in Table 1.] [Table 1. Fit parameters for the calibration curves in Fig. 3 and standard deviation σ of the residuals of the fits.] Figure 4, as well as Figure S3 of the Supplements, in combination with the calibration data in Fig. 3, show that the Landau tails lead to a strong overlap of the pit diameters for particles of interest, and particularly for alpha particles when protons are present. Crucially, depending on the relative particle flux and overall signal-to-noise level of the data, the tail of the skewed proton distribution may be interpreted as an alpha particle signal, as is shown in the Suppl. Materials. To understand how these distributions vary with ion energy,
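The 'Langau' shape can be illustrated numerically. In the sketch below, the skewed Landau component is approximated by SciPy's Moyal distribution, a common closed-form stand-in for the Landau distribution, and convolved with a Gaussian of width η_L; this reproduces the qualitative shape discussed around Fig. 4 but is not the reference implementation of ref. 58.

```python
import numpy as np
from scipy.stats import moyal, norm

def langau_pdf(d, mu_L, sigma_L, eta_L, n_grid=2001):
    """Approximate Landau(+Gaussian) pit-diameter distribution on a grid.

    The skewed component is modelled with the Moyal distribution (an analytic
    approximation to the Landau distribution) and convolved numerically with a
    Gaussian of width eta_L.
    """
    x = np.linspace(d.min() - 5 * eta_L, d.max() + 5 * eta_L, n_grid)
    dx = x[1] - x[0]
    skewed = moyal.pdf(x, loc=mu_L, scale=sigma_L)
    gauss = norm.pdf(x - x.mean(), scale=eta_L)       # kernel centred mid-grid
    conv = np.convolve(skewed, gauss, mode="same") * dx
    return np.interp(d, x, conv)

# Parameters of the He-ion distribution quoted in the text (Fig. 4).
d = np.linspace(0.3, 1.2, 200)                        # pit diameter in micrometres
pdf = langau_pdf(d, mu_L=0.639, sigma_L=0.059, eta_L=0.035)
print(f"most probable diameter ~ {d[pdf.argmax()]:.3f} um")
```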
Fig. 5 shows the scaling of the normalized Langau distribution width σ/μ = (σ_L + η_L)/μ versus energy. The maximum width is ~ 0.05 for an energy of ~ 10 MeV. This maximum does not coincide with the maximum pit diameter, which is observed to be at 4 MeV. After passing through a minimum at ~ 2-6 MeV, the width of the distribution strongly increases for lower energies. A similar behavior was observed for the He ions (not shown) but shifted towards lower energy values and higher σ/μ values. The increasing σ/μ for energies below 1-2 MeV may be explained by the fact that carbon ions at these energies are fully stopped within a range of a few microns. The gray vertical bars mark the incoming carbon energy for a range of 1, 2, and 3 μm. Our etch conditions result in a track depth of ~ 1-2 μm, which means that for the low energies the etching has already moved near or beyond the stopping range, where energy straggling leads to a significant broadening of the pit size distribution 18. [Figure 4. Distribution of pit diameters after (2.25 ± 0.06) MeV He ion irradiation. He ions with this energy have a stopping range of 9.91 µm according to SRIM, which is well above the pit depth expected for our etch conditions. The distribution exhibits a tail towards larger pit diameters, which can be well described by a Langau distribution centered at µ_L = 0.639 µm with σ_L = 0.059 µm and η_L = 0.035 µm (continuous line). A pure Gaussian/Landau (dotted/dashed lines) distribution would underestimate/overestimate the tail.] [Figure 5. Most probable pit diameter μ (blue) and normalized Langau width σ = σ_L + η_L (red) for carbon ions versus ion energy. Data from the ion accelerator calibration are plotted as squares. Data from the TP analysis are plotted as dots marking the width of histograms from single microscope images. The larger circles plot the average width; the error bars denote the variance of the data. The continuous blue line is the calibration curve discussed in Fig. 3. The scaling of the relative width of the Langau distribution is non-monotonic and different from the scaling of the most probable pit diameter. The width of the Langau distribution was divided by μ to show the relative scaling in a unitless quantity. In black we plot scaled results of a TRIM simulation for the vacancy production in the first µm of CR-39, which exhibits a similar scaling. The gray vertical lines mark the initial energy of C ions corresponding to a 1, 2, or 3 µm stopping range, which is close to the etched pit depth. Near the end of the ion range, the distribution broadens and can no longer be described by a Langau distribution.]
For further insights into the scaling of the observed width of the pit distribution for energies above one MeV, we have performed ion tracking simulations with TRIM 55, analyzing the vacancy production to quantify the radiation damage 59. We simulated the depth-dependent vacancies produced by 2,000 incident ions within a volume extending from zero to 1 μm depth in CR-39. Assuming a direct proportionality between the vacancy count and pit diameter in CR-39, we retrieve the most probable vacancy count μ_TRIM and width σ_TRIM of the distribution (see Suppl. Materials for details). The TRIM results were scaled with a constant factor to convert from vacancy number to pit size. Additionally, the energy values had to be scaled by a factor of ~2 to get a better match to our measurements. The shift in projectile energy may be due to different material densities between simulation and experiment, or different observation volumes. The important message is that the well-tested vacancy production model of TRIM shows the same scaling with energy as our measurements, explaining the observed pit size distributions for monoenergetic ions.
Forward fitting methodology
Having understood the CR-39 response to individual ions, we now introduce a method to analyze pit size distributions comprising tens of thousands of pits from laser-matter interaction experiments in order to obtain multi-particle spectra. CR-39 plates are equipped with multiple adjacent filter foils of different thickness and material, which prevent ions stopped within the foil from creating a track in the CR-39. Due to the different stopping powers of particles of different mass within the filters, it becomes possible to obtain more information about the respective particle energy spectrum. As shown above, simplistic pit size measurements will lead to incorrect particle numbers and spectra due to significant energy loss statistics and the partial overlap of the calibration curves. Here, we propose a refined analysis method, which is based on adding prior knowledge about particle distributions in a self-consistent fashion. Our method was independently developed but turned out to be similar to a method used to infer fusion proton and deuteron spectra in implosion experiments from filtered CR-39 plates 60. We start with the prior knowledge that laser-driven ion emission spectra from solid targets are almost always exponentially decaying [61][62][63][64]. Most spectra can be described by a Boltzmann distribution (Eq. 3) or a modification thereof in the case of an isothermal plasma expansion 65 (Eq. 4), where N_0 is the total particle yield and k_B T describes the slope (or temperature) of the spectrum. This leads to the following observations and conclusions for fitted CR-39 measurements:
1. Filter foils in front of the CR-39 act like a high-pass filter: all ions with energies significantly above the filter threshold are transmitted with negligible energy loss, while ions below the threshold are blocked. Near the threshold the energy loss is no longer negligible. We use SRIM data tables to accurately track the energy loss of ions propagating through the filters.
2. Exponential particle spectra before the filter are still exponential after the filter, resulting in a broad distribution of pits for each particle species. The measured pit sizes can be described by the product of the energy spectrum after the filter times the calibration curve.
3. For any given ion at any given energy, the energy loss probability is described by a Langau distribution.
Therefore, the measured pit size distribution is the result not only of the particle spectra times the calibration curve, but also of the convolution with a Langau distribution corresponding to the particle energy and species.
4. Instead of unfolding the spectra from the measured pit size distributions, which is not trivial due to the strong non-linearities and noise involved, we perform a forward-fitting procedure: starting with an assumption of the spectral distribution, such as Eq. (3) or Eq. (4), we generate a calculated pit size distribution for protons, heavy ions (carbons), and alphas each, then add the pit size distributions to a combined histogram and compare this distribution to the measured one (Eq. 5). In this forward model, a particle spectrum dN/dE is first multiplied by dE/dr, obtained from the calibration of pit diameter 2r versus energy (Eq. 2), and the resulting product is then convolved with a Landau distribution L(r) to include the energy loss probability statistics. The sum is taken over all particle species, e.g., protons, carbon (or boron), and alpha particles.
5. Such a forward calculation is performed simultaneously for several CR-39 plates, which were all filtered differently and placed next to each other in an experiment, instead of just one filtered CR-39, to be much more sensitive to the spectral shape.
6. A best fit is obtained by minimizing a modified, logarithmic χ2 criterion (Eq. 6), where χ_j for one filtered CR-39 histogram j is calculated as the squared difference between measured and calculated counts, summed over all diameters from a user-chosen minimum diameter D_min to the maximum diameter D_max, and normalized by the total calculated counts as a weighting factor. The +1 is needed to avoid undefined behavior. The total difference χ for all CR-39 forming a common detector is calculated as the sum of squares of the individual χ_j.
By doing so, the ion spectra parameters (N_0, k_B T) that best fit the measurements can be selected. Our numerical implementation of the forward fit additionally includes compensation for potential etch variations between the different CR-39 plates, as well as compensation for potential particle counting fluctuations relative to the analytic distribution. In the Suppl. Materials, we present artificial pit size distributions calculated by the forward fitting method and, using artificial data with added noise, we verify that the presented minimization method reproduces the true spectra. In the case of a small population of alpha particles compared to protons or carbons, we show that this population can be retrieved with reasonable error bars.
Experimental validation
The CR-39 spectrometer was used in an experiment campaign at the Laboratory for Advanced Lasers and Extreme Photonics at Colorado State University, Ft. Collins, CO, USA. The 80-fs, 3-J laser pulse was used to irradiate a commercial, 2-mm-thick boron sample. Ions were detected with two sets of filtered CR-39 spectrometers. Further details about the experiment are listed in the Methods section.
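Before turning to the measured data, the forward model outlined in points 1-6 above can be made concrete with a short sketch. This is an illustration rather than the authors' implementation: the calibration function, the energy-loss kernel, the filter handling, and all parameter values are hypothetical placeholders, and the χ² expression follows the verbal description of point 6 rather than the paper's exact Eq. (6).

```python
# Simplified forward model for one filtered CR-39 plate (illustrative only).
import numpy as np

def spectrum_boltzmann(E, N0, kT):
    """Exponentially decaying ion spectrum dN/dE, cf. Eq. (3). Placeholder form."""
    return N0 / kT * np.exp(-E / kT)

def calibration_diameter(E):
    """Placeholder calibration: most probable pit diameter [µm] vs ion energy [MeV].
    In the real analysis this comes from the calibration fits of Fig. 3 / Eq. (2)."""
    return 0.4 + 0.8 * np.exp(-E / 3.0)

def langau_kernel(r, sigma_l=0.03, eta_l=0.01):
    """Crude skewed (Moyal-like) kernel standing in for the Langau energy-loss statistics."""
    x = r / (sigma_l + eta_l)
    return np.exp(-0.5 * (x + np.exp(-x)))

def predicted_histogram(diam_bins, N0, kT, filter_threshold_MeV):
    """Forward model: spectrum after the filter -> pit diameters -> Langau smearing."""
    E = np.linspace(filter_threshold_MeV, 30.0, 2000)   # energies transmitted by the filter
    dNdE = spectrum_boltzmann(E, N0, kT)
    diam = calibration_diameter(E)
    # Histogram the ideal diameters weighted by the spectrum ...
    ideal, _ = np.histogram(diam, bins=diam_bins, weights=dNdE)
    # ... then convolve with the energy-loss kernel (energy-independent here for brevity).
    centres = 0.5 * (diam_bins[:-1] + diam_bins[1:])
    kernel = langau_kernel(centres - centres.mean())
    kernel /= kernel.sum()
    return np.convolve(ideal, kernel, mode="same")

def chi2(measured, calculated):
    """Squared differences normalized by the total calculated counts (+1 avoids division
    by zero), following the verbal description of the modified chi-square criterion."""
    return np.sum((measured - calculated) ** 2) / (np.sum(calculated) + 1.0)

# Example usage with hypothetical numbers; 'measured' is itself synthetic here.
bins = np.linspace(0.3, 1.6, 66)
measured = predicted_histogram(bins, N0=1e11, kT=4.4, filter_threshold_MeV=1.0)
trial = predicted_histogram(bins, N0=8e10, kT=3.5, filter_threshold_MeV=1.0)
print("chi2 of trial parameters:", chi2(measured, trial))
```

In the full analysis the same forward calculation would be evaluated for every filtered plate and summed into the total χ, as described in points 5 and 6.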
Figure 6 shows the counts-versus-diameter histograms for an array of six filtered CR-39 compared to the calculated, best-fitting artificial histograms.The histograms were generated by analyzing 100 microscope images for each filter, comprising a total of 600 CR-39 images.Each measured CR-39 histogram had to be shifted by a few percent (typically below ± 3%) along the diameter axis to achieve the best-possible fit.Additionally, in all the CR-39 from this experiment the proton calibration curve must be shifted by about − 15% with respect to the calibration curve plotted in Fig. 3, indicating that the calibration data were insufficient to determine the peak location of the hydrogen curve. Most particles detected in this experiment are protons, with a most probable diameter of ~ 0.5 µm, and carbon or boron ions with a most probable diameter of ~ 1.2 µm.CR-39 calibration for boron ion was not possible with the data from this campaign.Since the charge-to-mass ratio of boron ions is similar to carbon ions, we expect the two ion species to create very similar pit diameter distributions and consider them virtually indistinguishable.The right tails of both peaks in each CR-39 are mainly due to energy loss statistics (Langau distributions).A good match to the data was obtained using σ L,p = 0.035 μm, η L,p = 0.015 µm for the protons and σ L,c = 0.025 μm, η L,c = 0.009 µm for carbons, in agreement with the expected data for low-energy particles (see Table S1 in the Suppl.Materials) . After manually finding an initial, close match to the data, a MC scan was performed for 50,000 parameter samples, randomly varying the analytic spectra parameters from 0.5 to two times the manually found optimum to find the global minimum.Such a sensitivity scan is used to not only infer the best-fitting parameters but also the error of the fit due to the noisy input.The MC results were afterwards filtered by only those parameters that are within ± 25% of the global minimum.The global minimum was found for N 0 = 1 × 10 11 sr −1 J −1 and k B T = 4.36 MeV for protons with a spectrum described by Eq. ( 4), and N 0 = 1.4 × 10 10 sr −1 J −1 and k B T = 2.01 MeV for carbons with a distribution described by Eq. ( 3). Analysis of the second CR-39 array of the same experiment (CR39a, see Methods) results in 1.6 × 10 11 sr −1 J −1 protons with k B T = 0.5 MeV and 3.4 × 10 10 sr −1 J −1 carbon ions with k B T = 1.5 MeV.For a demonstration of the validity of the forward-fitting method, we compare the spectra obtained by this second CR-39 array to measurements obtained by the Thomson parabola (TP) ion spectrometer 66 , since the CR-39 array was placed at the vacuum port closest to the TP. Figure 7 shows the spectra measured by the TP, compared to the spectra obtained by the CR-39 array, for two different targets.Both the carbon and proton spectra are well reproduced, except for a constant multiplier in the absolute particle flux and some spectral modulations.The CR-39 spectrometer measured twice as many protons as the TP.Note that the scanner used to scan the Fuji BAS-TR image plate fielded in the TP was only coarsely calibrated for protons in an earlier experiment 67 , which can explain the factor of two difference between TP and CR-39 data found here.Due to a lack of carbon ion calibration of the used scanner, we used a calibration by Doria et al. 68 for a different scanner to convert from image plate units to particles.We therefore expect differences in absolute numbers between the ion spectra obtained by the TP compared to CR-39. 
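The Monte-Carlo sensitivity scan described above (50,000 random samples around a manually found optimum, keeping only samples close to the global minimum) can be sketched as follows. It reuses the hypothetical `predicted_histogram`, `chi2`, `bins`, and `measured` objects from the previous sketch; the "±25% of the global minimum" criterion is interpreted here as a 25% band on χ², which is one reading of the text and not necessarily the authors' exact implementation.

```python
# Monte-Carlo sensitivity scan around a manually found optimum (illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Manually found starting point for (N0, kT) of one species (hypothetical values).
N0_manual, kT_manual = 1.0e11, 4.4
n_samples = 50_000

samples = []
for _ in range(n_samples):
    # Vary each parameter between 0.5x and 2x the manual optimum, as described in the text.
    N0 = N0_manual * rng.uniform(0.5, 2.0)
    kT = kT_manual * rng.uniform(0.5, 2.0)
    trial = predicted_histogram(bins, N0=N0, kT=kT, filter_threshold_MeV=1.0)
    samples.append((chi2(measured, trial), N0, kT))

samples.sort()                       # best (lowest chi2) first
chi2_min = samples[0][0]
# Keep only samples within 25% of the global minimum to estimate the fit uncertainty.
good = [(c, N0, kT) for c, N0, kT in samples if c <= 1.25 * chi2_min]

best_chi2, best_N0, best_kT = samples[0]
print(f"best fit: N0 = {best_N0:.2e}, kT = {best_kT:.2f} MeV "
      f"({len(good)} samples within 25% of the minimum)")
```

The spread of the retained samples plays the role of the error bars quoted for the fitted spectra.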
Since protons from the contamination surface layer may be accelerated enough to trigger proton-boron fusion reactions that result in alpha particles, we included a Gaussian distribution of alpha particles in the forward fit and the MC scan. In Fig. 8 we show two-dimensional maps of the total χ2 versus particle number N_0 and average energy µ for the alpha particles within the data plotted in Fig. 6 for CR39b. We find a global optimum at N_0 = 1253 and μ = 5.5 MeV for the alpha particles, corresponding to ~200 detected alpha particles per shot. A second MC scan assuming an exponential particle spectrum instead of a Gaussian leads to an optimum at zero. This result is reassuring in that it shows that there is a unique solution for the alpha particle spectrum. Analysis of the second CR-39 array (CR39a, Fig. 7a) of the same experiment results in ~200 alpha particles per shot as well, but centered at a slightly higher energy of 6.1 MeV. Alpha energies slightly higher than those detected by CR39b might be explained by particle acceleration due to electrostatic sheath fields near the surface 38. To further investigate the detection limit of this technique of inferring alpha particles, we performed the experiment on a target where no significant alpha particle emission is expected. For this, in a second experiment, we used a pure graphite plate instead of boron as a target. Performing the same analysis as above results in an eight times lower proton count per Joule of laser energy but a 24 times lower alpha particle yield at a similar energy as for the boron plate, further confirming that the boron plate irradiation produced measurable alpha particles. The non-vanishing alpha particle number from the graphite plate may be attributed to the measurement limit of the technique or to possible secondary reactions that can occur with C, O, or N for proton energies above a few MeV. Subtracting the carbon plate alpha signal as a background and correcting for the solid angle, the final alpha yield for the boron plate is N_α = (1.3 ± 0.7) × 10^8 sr^-1 J^-1.
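As a rough order-of-magnitude check of how such a yield figure follows from the detected counts, the short calculation below combines approximate numbers quoted elsewhere in the text (≈1 mm² of analyzed area per plate, ≈1.7 m distance, ≈2.5 J on target, ~200 alphas per shot, and a ~24× lower graphite background). It is not the authors' exact analysis, which uses the full forward fit over six filtered plates.

```python
# Rough re-derivation of the background-subtracted alpha yield (order-of-magnitude check,
# not the authors' exact analysis; all inputs are approximate values quoted in the text).
area_m2 = 1e-6             # ~1 mm^2 of analyzed CR-39 surface per plate
distance_m = 1.7           # CR-39 arrays were ~1.7 m from the interaction point
solid_angle_sr = area_m2 / distance_m**2

laser_energy_J = 2.5       # ~2.5 J on target
n_alpha_boron = 200        # detected alphas per shot inferred for the boron target
n_alpha_graphite = n_alpha_boron / 24   # graphite control gave ~24x fewer alphas

yield_per_sr_per_J = (n_alpha_boron - n_alpha_graphite) / (solid_angle_sr * laser_energy_J)
print(f"{yield_per_sr_per_J:.1e} alpha particles / sr / J")
# ~2e8, i.e. the same order of magnitude as the quoted (1.3 +/- 0.7) x 10^8 sr^-1 J^-1.
```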
Conclusions
An array of multiple filtered CR-39 detector plates, in combination with careful image analysis, a pit-diameter calibration for ion species and energy, and an understanding of the energy loss statistics, forms a compact and inexpensive particle spectrometer that can be easily fielded in large quantities for three-dimensional, space-resolved, multi-ion spectroscopy. The application of such a spectrometer is not limited to laser-plasma interaction experiments but can have a much broader impact. CR-39 spectrometers could be fielded in other proton-boron fusion experiments 71 to measure alpha particle yields in strong proton and heavy ion backgrounds with high fidelity. More generally, the spectrometers can be fielded in any experiment where models of the particle spectra exist, provided a calibration for those particles has been performed.
To further increase the understanding of the uncertainties of CR-39 multi-ion spectroscopy, more advanced multivariate optimization methods such as Markov-Chain Monte-Carlo sampling 72 may be implemented. Additionally, our current process of manually acquiring microscope images limits the effective detector area to ~1 mm² per CR-39, resulting in a small detection solid angle and in results that may still suffer from insufficient statistics. Replacing the current manual methods with an automated high-throughput processing (HTP) system would reduce user-dependent uncertainty and has the potential to handle tens of thousands of samples per day. An HTP system with robot-driven processing, including parallel etching and microscopy, can eliminate laborious, error-prone tasks and significantly improve statistics, data repeatability, and reliability. This would further improve CR-39 ion spectroscopy during the preparation, data acquisition, and analysis stages 73.
Although the research presented in this manuscript does not claim to provide a definitive approach for the analysis of CR-39 detectors such that the accelerated ion species can be unambiguously distinguished and quantified, it paves the way for further optimizing the analysis of such detectors, which are, for example, of fundamental importance and widely used as an alpha particle diagnostic in proton-boron fusion processes. We have found that for short etch times, pits produced by alpha particles from proton-boron fusion reactions have the same diameters as proton pits in the Landau tail. In particular, data analysis without employing an advanced pit recognition algorithm and without including the Landau tail can lead to an overestimate of the inferred alpha yields. In the data presented here, this would result in an inferred alpha particle yield that is about 200 times too high (see Suppl. Materials). Such a result would have a significant impact on further conclusions about the viability of laser-driven proton-boron fusion. Nevertheless, the still impressive particle yields from the structured boron sample used in our experiments encourage further investigations into the viability of high-contrast, short-pulse lasers interacting with engineered targets to create advanced ion acceleration schemes, high energy density plasmas, or thermonuclear fusion conditions.
CR-39 etching, cleaning and microscopy
TASTRAK™ CR-39 plates by Track Analysis Systems Ltd with dimensions 20 × 20 × 1.5 mm³, equipped with laser-engraved consecutive numbering for identification, were purchased from Mi.am Srl, Italy. After ion exposure, they were etched to enlarge the latent tracks to the point where they can be observed with an optical microscope. The etching was performed for 30 min at 70 °C in 6.25 M NaOH solution to minimize pit overlap and to minimize the visibility of proton tracks, which appear over extended etching periods 18,54 and which could lead to oversaturation of the detector for longer etch times. After etching, the samples were quenched twice in DI water and rinsed repeatedly. The samples were then stored in DI water and rinsed individually. Afterwards, they were first dipped in and then rinsed with isopropanol. Finally, they were blown dry with air. The rinsing and drying steps were helpful in removing any residue from the surface, thus aiding the microscopy. The latter was performed with a Keyence VHX-7000 digital microscope, equipped with a VHX-7100 fully integrated head unit. The minimum microscope resolution (Rayleigh criterion) was determined to be 0.4 µm using a commercial high-resolution test chart. During digitization, we applied manual focusing to maintain the optimal focus to within a few percent, allowing for accurate characterization of the pits. For each CR-39, 100 pictures corresponding to a size of 114 × 85 µm² and a pixel size of 40.7 nm in the object plane were taken to provide sufficient statistics.
Calibration measurements
Calibration measurements of pit diameter versus particle species and particle energy were performed at the tandem accelerator at the Institute for Plasma Physics (IPP) in Garching, Germany, for H, He, and C ions; with a Thomson parabola ion spectrometer fielded at the Texas Petawatt laser facility at the University of Texas in Austin, TX, USA, for H and C ions; and with a 239Pu calibration source for He ions. Starting with the latter, the 1.5 kBq 239Pu calibration source emits alpha particles with an energy of 5422.43 keV. The 6-mm diameter source was placed 5.6 mm from the CR-39 detector. Two different 2 × 2 cm² CR-39 samples were equipped with 6 different Al filters, in addition to the air gap, to attenuate the alpha energy to values between 4.7 and 1.3 MeV. The CR-39 were exposed to the alpha source for 5 min and then etched and processed as described in the main text.
A wide-range energy calibration for carbon ions was obtained from a single shot of a laser-driven ion source with a Thomson parabola 66, equipped with a large-area (9 × 9 cm²) CR-39 detector and fielded at the Texas Petawatt laser during an experiment. The laser target was a thin foil made of CH to minimize secondary ion contamination along the q/m = 0.5 trace, so that only C ions and protons are detected there. After irradiation, the plate was processed as described above. For digitization, 600 images were taken along the q/m = 0.5 trace, including the absolute position of each image on the CR-39 plate with respect to the origin of the parabolic traces. After processing with the HWHM method, the energy-dependent incidence angle per image was calculated to obtain the eccentricity via ε = √(1 − b²/a²), where a and b denote the half-axes of the ellipse, in order to compensate for the eccentricity of the pits due to the deflection in the TP and the corresponding non-normal incidence on the CR-39. Without this correction we noticed a systematic offset of the measured pit diameters compared to the diameters obtained at the tandem accelerator at normal incidence for the same carbon energies. The same CR-39 plate was intended to be used to measure pit size versus energy for protons. However, no proton pits could be detected. The CR-39 plate had rectangular cutouts at regular intervals to detect the ion spectrum on an image plate underneath the CR-39. The image plate shows a clear and strong proton trace up to several tens of MeV energy. The low-energy cutoff of the instrument is at ~0.5 MeV for protons. This clearly demonstrates that proton pits above 0.5 MeV are too small to be detected in our configuration.
To obtain pit diameters at very low to intermediate energies, a series of calibration measurements was performed at the IPP tandem accelerator in Garching, Germany. We used Rutherford backscattering (RBS) off a 100-nm-thin Au foil to attenuate the beam from the minimum accelerator flux of 10^9 ions/cm²/s to the levels required for calibration. RBS leads to a slight broadening of the particle spectrum due to the partial energy loss of the ions in the foil. The backscattered ion spectra were calculated with the software SIMNRA 74. Depending on the ion energy, the energy loss was between 10 and 40%. Using the energy loss to our benefit, we obtained data down to almost 0.1 MeV. The maximum energies were 4 MeV for H, 8 MeV for He, and 10 MeV for C ions. For each ion energy, up to 20 CR-39 plates with 2 × 2 cm² area were fielded simultaneously at angles between (180 ± 25)° to increase the likelihood of good irradiation statistics. Post-irradiation, the CR-39 were processed as described above.
CSU experiment details
The experiment campaign was performed at the Advanced Laser for Extreme Photonics (ALEPH) at Colorado State University (CSU). ALEPH is a frequency-doubled Ti:Sapphire laser system that can deliver up to 0.85 PW at a central wavelength of 400 nm with excellent laser contrast 75. During this experiment the laser was focused onto boron targets using an f/2 off-axis parabolic mirror (OAP, see Fig. 9), which in this campaign delivered around 2.5 J in 88 fs within a focal spot of 1.6 µm FWHM. Analysis of the focal spot via a high-dynamic-range image reconstruction showed that the laser pulse reached an intensity of 4 × 10^20 W/cm² on target.
The targets were positioned in the laser focus using a motorized XYZ-stage. Within one experimental run, up to 6 individual targets were irradiated. Boron targets were made from a commercial (Goodfellow Cambridge Ltd.), 2-mm-thick, hot-pressed boron plate with a rough surface. Note that, without special treatment, all targets in such experiments exhibit a few-nm-thick layer of hydrocarbon impurities from CH, oil, or water vapor on their surfaces. To diagnose the accelerated ions, two different particle diagnostics were fielded in the vacuum chamber. These diagnostics consisted of two Thomson parabola (TP) ion spectrometers, one positioned along the laser propagation direction (TP1), while the second one was placed close to the OAP at an angle of ~35° with respect to the target normal (TP2). Additionally, two arrays of seven CR-39 solid-state nuclear track detectors with different filters were positioned at distances of ~1.7 m from the interaction point at two different angles with respect to the target normal (45° for CR39a and 65° for CR39b). Custom, 3D-printed frames allowed reproducible placement of the 2 × 2 cm² CR-39 via slots on the side of the frames. Each frame was covered with one of the following filters: 1 μm Mylar, or aluminum of 2 μm, 4.5 μm, 8 μm, 12.5 μm, or 25 μm thickness. The central CR-39 served as a witness sample and was etched immediately following a shot series to verify that the particle flux was not saturating the detectors.
Figure 1. (a) Example of a grayscale image of a 114 µm × 85 µm section of an etched CR-39 plate. The orange and green frames denote sections of the image that are considered more closely for illustrative purposes. (b) Zoomed section (10 μm × 10 μm) of the orange frame from panel (a). The blue and red circles are the result of applying a Hough transform on the corresponding binary images with g = 132 and g = 100, respectively. The background had grayscale values around 137-142. (c) Zoomed section (18 μm × 18 μm) of the green frame from panel (a). The blue and red circles depict the result of the Hough transform where the lax threshold is kept constant and the strict threshold parameter g_strict is varied slightly. (d) Histogram of the number of pits found as a function of their diameter for the total CR-39 image depicted in (a) for the three thresholds. Visually, the variations in the strict threshold (panel c) do not differ much from one another, despite leading to the inconsistent histograms depicted in panel (d).
Figure 2. Histogram of the number of pits found as a function of their diameter for the total CR-39 image depicted in Fig. 1a, using the HWHM method. In contrast to Fig. 1d, the HWHM method results in two distinct peaks at around 0.5 and 1.2 µm, in striking contrast to any of the previous histograms in Fig. 1c, as well as the absence of the artificial cut-off at around D ≈ 1.5 μm.
Figure 3. CR-39 calibration curves for protons (gray), helium ions (red), and carbon (C6+, blue) from 0.1 MeV to 70 MeV ion energy, after 30 min of etching in 6.25 M NaOH at 70 °C. Note the logarithmic abscissa. The symbols depict the most probable diameters and the error bars the ±σ widths of the pit diameter distribution for each data point. Calibration data points obtained at the ion accelerator are plotted with squares, carbon results from the Thomson parabola ion spectrometer are plotted with circles, and He ion (alpha particle) data from a 239Pu source are plotted with diamonds. The dashed lines show the scaling of the deposited energy within 2 µm of CR-39 using a stopping power model based on SRIM data. Since the stopping power model deviates towards both the high and low energy limits of the data, we used an analytic fit (Eq. 2, continuous lines) for a better match. As a measure of the error of the fit, the shaded bands represent the variance (±2.35σ) of the residuals between the measured data and the fit. The fit parameters and variance values are given in Table 1.
Table 1. Fit parameters for the calibration curves in Fig. 3 and standard deviation σ of the residuals of the fits.
Figure 5. Most probable pit diameter μ (blue) and normalized Langau width σ = σ_L + η_L (red) for carbon ions versus ion energy. Data from the ion accelerator calibration are plotted as squares. Data from the TP analysis are plotted as dots marking the width of histograms from single microscope images. The larger circles plot the average width; the error bars denote the variance of the data. The continuous blue line is the calibration curve discussed in Fig. 3. The scaling of the relative width of the Langau distribution is non-monotonic and different from the scaling of the most probable pit diameter. The width of the Langau distribution was divided by μ to show the relative scaling as a unitless quantity. In black we plot scaled results of a TRIM simulation for the vacancy production in the first µm of CR-39, which exhibits a similar scaling. The gray vertical lines mark the initial energy of C ions corresponding to a 1, 2, or 3 µm stopping range, which is close to the etched pit depth. Near the end of the ion range, the distribution broadens and can no longer be described by a Langau distribution.
Figure 7. Comparison of ion spectra measured by the Thomson parabola (TP) ion spectrometer with those obtained by the CR-39 method. (a) shows the measurement described in the text. In (b) we show results from a different sample to demonstrate the reliability of the method in inferring particle spectra. The best-fitting spectra for the CR-39 are similar to those measured by the TP, demonstrating the validity of the method. The dashed vertical lines mark the filter thresholds of the Mylar and aluminum filters used (see Fig. 6 for details), which were tuned for few-MeV heavy ions.
Figure 8. Sensitivity scan for alpha particles. The figure shows the variation of the total χ2 value for all six filtered CR-39 used in the array when the assumed number of alpha particles N_0 and their average energy µ or temperature k_B T are varied. In (a), we assumed a Gaussian spectrum, characterized by N_0 and µ, with a full width at half maximum of 1 MeV. To generate this plot, the input parameters for all three particle species were sampled 50,000 times with an MC method and afterwards filtered to keep only those parameters that are within ±25% of the global minimum. The MC scan reveals a global optimum, marked by the dashed lines. In (b), we show results of a second MC scan assuming an exponential distribution, characterized by N_0 and k_B T and described by Eq. (4), instead of a Gaussian, which results in a global optimum at zero.
Figure 9. The figure illustrates the experimental setup at the ALEPH laser system, featuring the incoming laser beam, the off-axis parabolic mirror (OAP) used to focus the laser pulse onto the target placed at the center of the target chamber, and the ion diagnostics, which consisted of Thomson parabola ion spectrometers (TP) and two sets of filtered CR-39 detectors. The left inset shows the configuration of the CR-39 arrays in custom, 3D-printed frames (dark grey structure) equipped with filters of different thickness. The right inset shows a microscope image of a boron target surface featuring modulations from nanometers to about 10 µm.
10,817.8
2023-10-24T00:00:00.000
[ "Physics", "Engineering" ]
Influence of Stock Application Attributes on Consumer Choice Decision (Case Study of Stockbit Consumer Choice) : The development of stock investment in Indonesia has experienced significant growth from year to year, according to the President Director of KSEI (Indonesia Central Securities Depository), Uriep Budhi Prasetyo said, "The growth of stock investors is one of the benchmarks for the achievement of the Indonesian stock market, growth occurs significantly during the Covid-19 pandemic, this shows that the Indonesian people are increasingly aware of the importance of investment, especially in the stock market. In the current era of digitalization, almost all daily activities can be done online and most of them can be easily accessed via their personal mobile phones. The number of mobile applications for stock investment has tightened competition. Not only securities from bank companies but also from non-bank companies also enliven the competition in this sector. The many alternative choices of course provide flexibility for consumers in choosing which application suits the needs of each consumer. Through this research, the authors aim to determine what factors influence consumer decisions when choosing stock applications. This study will use the choice modeling analysis method by providing 28 scenario-stated preferences resulted from NGENE software in the form of questionnaires to 200 respondents, in each of these questionnaires there are several attributes of stock applications as consideration for consumers in choosing stock applications including application performance, completeness of features, user security and privacy, transaction fees, and application appearance (UI). The results of this questionnaire are processed using the Multinomial Logit (MNL) method which is run using Python-Biogeme. The results of the study show that in general there are 4 attributes that significantly influence consumer decisions in choosing stock applications, including application performance, completeness of features, user security and privacy, and transaction fees. In addition, this research also shows several attributes that are elastic to consumers in certain applications, namely the user security & privacy and transaction fees in the Ajaib application and in the IPOT application is completeness of feature and transaction fees. INTRODUCTION The development of stock investment in Indonesia has experienced significant growth from year to year, according to the President Director of KSEI (Indonesia Central Securities Depository), Uriep Budhi Prasetyo said, "The growth of stock investors is one of the benchmarks for the achievement of the Indonesian stock market, growth occurs significantly during the Covid-19 pandemic, this shows that the Indonesian people are increasingly aware of the importance of investment, especially in the stock market According to press reports published by KSEI (Figure 1), the number of stock investors in Indonesia in June 2022 exceeded 4 million, which increased by 15.96% compared to last year.KSEI also said that 95% of the increase in the number of investors was due to the convenience of opening accounts online which greatly helped the public to become investors in the stock market. 
In the current era of digitalization, almost all daily activities can be done online and most of them can be easily accessed via their personal mobile phones There are several official applications registered with the OJK that allows users to open customer fund accounts (RDN) and trade stock on the Indonesia Stock Exchange, including Ajaib, Stockbit, IPOT, MOST by Bank Mandiri, BIONS by BNI, BCAS Best Mobile, etc.The number of these applications certainly provides alternatives and flexibility for the consumers to determine which applications they like or trust to buy and sell shares so that every company in this industry needs to know what investors consider important in a stock trading application to be able to attract and make investors interested in using the application.This is in line with the results of research conducted by Ahmad R. A in 2021 which states that the stock application feature has a positive effect on interest in investing online.As many stock buying and selling applications as previously mentioned, there are 5 best applications according to a survey conducted by Jakpat in 2022, this survey was conducted on 2333 respondents aged 15-44 years in Indonesia.The result of this survey show Ajaib was the best application with a percentage of 67%, followed by IPOT 31 %, Stockbit 31%, BIONS and MOST with 19%.This means that currently, Stockbit is still not the first choice of consumers in choosing stock applications.Therefore, it is important for Stockbit to know consumer needs related to stock applications as a basis for building business and marketing strategies in the future.This is in accordance with the statement Green & Srinivasan (1978, 1990) that for many years conjoint analysis has been used to estimate the importance of various product attributes for consumers purchasing decisions. LITERATURE REVIEW A mobile application is a software or set of programs that run on a mobile device to perform certain tasks for its users.Mobile applications have a wide range of uses ranging from calling, sending messages, browsing, buying and selling, gaming, playing audio or video.A large number of mobile applications are pre-installed on the phone and some can be used by downloading from the internet or mobile application provider platforms such as playstore or app store (Islam, 2010).Product attributes are the development of a product or service that involves the benefits that the product or service will offer, product attributes including product information, quality, and prices had a positive effect on purchase intention (Kotler and Armstrong, 2012).Basically, consumers will go through 5 stages in the purchasing decision process, starting from need recognition, information search, evaluation of alternatives, purchase decisions, and finally post-purchase behavior.Consumers can skip this stage or even reverse the process on the regular or routine purchase process (Kotler & Keller, 2012).Therefore, the researcher determines the hypothesis as follows, HI: Product Attributes have a positive effect on consumer decision From the explanations and hypotheses determined above, this study will analyze the effect of product attributes on consumer decisions on choosing stock applications. 
METHODOLOGY
In this research, the authors use the choice modeling analysis method by presenting 28 stated-preference scenarios, generated with the NGENE software, in the form of questionnaires to 200 respondents. Stated preference is an experiment to find out preferences for one alternative compared to other alternatives. Respondents' or user preferences will indicate whether they like or dislike a product or service (Kotler, 1997). In each scenario, the author attaches several attributes of stock applications as considerations for consumers in choosing stock applications, including application performance, completeness of features, user security and privacy, transaction fees, and application appearance (UI). The data obtained from the survey will be analyzed and processed using the choice model method. This model will calculate which attributes are important in several steps. The model is estimated using multinomial logit (MNL) modeling techniques with Python-Biogeme as the software (Bierlaire, 2016). Biogeme can perform hypothesis testing about parameters and can estimate the model parameters (Washington.Edu, 2019). According to Koppelman and Bhat (2006), utility is an important aspect of this method, and an alternative is selected if its utility is greater than the utility of the other alternatives in the choice set. To support this research, the authors also asked a number of questions related to the demographics of respondents, such as name, gender, age, domicile, occupation, income and expenses per month, amount of investment per month, and which stock applications have been or are being used.
RESULT AND DISCUSSIONS
Respondent Profile
A. Gender
Based on the survey that was conducted, it was found that the gender distribution of the respondents was as follows:
C. Domicile
The distribution of domiciles for stock application users is quite widespread at this time, as illustrated by the survey of 200 respondents in Table 3. The results of the survey of 200 respondents showed that most of them came from Jabodetabek, with 84 respondents or 46% of the total, followed by Bandung City with 61 people (33%), and the rest came from other cities in Indonesia and outside Indonesia. This is in line with the results of a national survey on financial literacy and inclusion conducted by the Financial Services Authority (OJK) in 2016. The results stated that the highest financial literacy index was occupied by DKI Jakarta, with a percentage reaching 40%, followed by West Java with a percentage of 39%. Table 4 shows an overview of the characteristics of respondents based on their occupation.
E. Monthly Income
The following is monthly income data from 200 respondents, which aims to show the financial capabilities of the respondents; the data are shown in Table 5. Based on the data in Table 5, respondents are divided into income groups whose numbers do not differ much from one group to another, where the IDR 1,000,001 - IDR 5,000,000 income group has the highest number with 76 respondents, or a percentage of 38% of total respondents; then 51 respondents or 25.5% are in the IDR 5,000,001 - IDR 10,000,000 monthly income group; followed by the income group of more than IDR 10,000,000, which is the third largest with 45 respondents or a percentage of 22.5%; and the last 28 respondents, or 14%, are in the monthly income group of less than IDR 1,000,000.
F.
Monthly Expenses The author also divides the monthly expenditure of respondents into 4 expenditure groups.6 it can be seen that the majority of the 200 respondents or as many as 112 respondents (56%) claimed to have monthly expenditures of IDR 1,000,001 -IDR 5,000,000, followed by the group of monthly expenditures of less than IDR 1,000,000 with 40 respondents or 20% total respondents.The third majority, as many as 31 respondents or 15.5%, is included in the monthly expenditure group of IDR 5,000,001 -IDR 10,000,000.then the remaining 17 respondents have a monthly expenditure of more than IDR 10,000,000.When compared with the amount of income per month, it can be seen that there is an increase in the IDR 1,000,001 -IDR 5,000,000 and less than IDR 1,000,000 monthly expenditure as well as a decrease in respondents in the IDR 5,000,001 -IDR 10,000,000 and more than IDR 10,000,000 monthly expenditure group.So, it can be said that the majority of respondents have monthly expenses that are smaller than their monthly income. G. Monthly Investment Every stock investor certainly has their own portion in allocating the amount of investment.Table 7 displays the amount of investment per month from the respondents.Based on Table 7 it can be seen that the majority of respondents or as many as 175 respondents (87.5%) invest per month with an amount of less than IDR 10,000,000, then as many as 16 respondents or 8% of the total respondents invest IDR 10,000,001 -IDR 50,000,000, it is quite interesting when we see that the third majority or as many as 6 respondents (3%) make investments per month with an amount of more than IDR 200,000,000, then as many as 2 respondents (1%) make investments per month of IDR 50,000.001-IDR 100,000,000, and the last 1 or 0.5% of respondents make a monthly investment of IDR 100,000,000 -IDR 200,000,000. Application Attribute Analysis After conducting a survey by distributing questionnaires to 200 respondents, the authors processed the existing data using multinomial logit modeling techniques (MNL) using Python-Biogeme (Bierlaire, 2020).By using this method, the writer can find out which attributes have a significant influence on the respondent's decision in determining the alternative stock application.Table 8 shows the results of the Biogeme simulation.From the results of the biogeme simulation above, it can be seen that there are several attributes and constants that are significant and not, where significant attributes are marked in red, which means that this attribute has a t-test value > 1.96 or P-values with values < 0.05.The author selects significant attributes into Table 9. Constant IPOT with a t-test value of 2,56 which is more than 1,96 shows a significantly positive, which means that respondents will tend to prefer IPOT over Stockbit. • The attribute of application performance quality in the Ajaib application has a t-test value of 6,56 where this value is greater than 1,96 (significantly positive) meaning that the better the quality of the application performance of the Ajaib application, the respondents will prefer to choose the Ajaib application. • The completeness of features attribute in the Ajaib application has a t-test value of 3,86 where this value is greater than 1,96 (significantly positive) meaning that the more complete the features in the Ajaib application, the respondents will prefer to choose the Ajaib application. 
• The attribute user security and privacy in the Ajaib application has a t-test value of 7,56 where this value is greater than 1,96 (significantly positive) meaning that the safer the user security and privacy of the Ajaib application, the respondents will prefer to choose the Ajaib application. • The transaction fees attribute for the Ajaib application has a t-test value of 2,29 where this value is greater than 1,96 (significantly positive) meaning that the higher the Ajaib application transaction fees, respondents will prefer to choose the Ajaib application. • The completeness of features attribute in the IPOT application has a t-test value of -3,38 where this value is greater than 1,96 (significantly negative) meaning that the more complete the features in the IPOT application, less likely respondents will choose the IPOT application. • The transaction fees attribute for the IPOT application has a t-test value of -4.76 where this value is greater than 1,96 (significantly negative) meaning that the higher the IPOT application transaction fees, the less likely respondents will choose the IPOT application. • The attribute of application performance quality in the Stockbit application has a t-test value of 8,16 where this value is greater than 1,96 (significantly positive) meaning that the better the quality of the application performance of the Stockbit application, the respondents will prefer to choose the Stockbit application. • The completeness of features attribute in the Stockbit application has a t-test value of 2,51 where this value is greater than 1,96 (significantly positive) meaning that the more complete the features in the Stockbit application, the respondents will prefer to choose the Stockbit application. • The attribute user security and privacy in the Stockbit application has a t-test value of 3,69 where this value is greater than 1,96 (significantly positive) meaning that the more secure the user security and privacy of the Stockbit application, the respondents will prefer to choose the Stockbit application.After obtaining a number of significant attributes, the authors measure the elasticity of each significant attribute of each application.This measurement was carried out using Python Biogeme where when the elasticity value of an attribute is greater than 1 then this attribute is elastic, and if the elasticity value of an attribute is less than 1 then it is included in the inelastic category (Kenton, 2020). Figure 1 . Figure 1.Stock Market Investor Data Figure 2 . Figure 2. Conceptual Framework Based on a survey to the respondents, Table Table 1 . Respondent Gender Summary Table 1 , the distribution of stock application users based on gender is dominated by males with a total of 145 users or 72.5% of 200 respondents, while female stock application users only contributed 27.5% or as many as 55 people.This percentage is arguably in line with the demographic data of stock investors published by the Indonesia Central Securities Depository (KSEI) in February 2022, where male investors have a percentage of 62.8% and female investors 37.2%. (Solomon, 2004)needs and interests change with age(Solomon, 2004).Therefore, it is important for companies to know the age distribution of their users.In this study the authors divided the age group into 6 groups with at least 17 years of age.Table4.2displays a summary of the age distribution of the respondents.ISSN: Table 2 . 
Respondent Age SummaryBased on the results of the survey that was conducted, it can be seen that most respondents came from the age group of 17-24 years with 114 people or 57% of the total respondents, followed by the age group 25-34 years with 32.5% or as many as 65 respondents.35-44 years with 8.5% of total respondents, and the rest are from the older age group.When compared, the age distribution of respondents who use the stock application is quite in line with the statement published by the Indonesia Central Securities Depository (KSEI), where at the end of semester 1 of 2022 the majority of stock investors are Gen Z and millennials or aged under 40 years with a percentage reaching 81.64%. Table 4 . Respondent Occupation Summary Based on Table4, the characteristics of the 200 respondents based on occupation are divided into 11 types, where respondents with student college occupations are the largest number with 82 respondents or 41% of the total 200 respondents, apart from that private employees also have quite a large number with 71 respondents or 35.5 %.Respondents with public servant totaled 14 people or 7%, followed by freelancers with 10 people or 5%.And the remainder or 11.5% of respondents came from quite a variety of occupations ranging from entrepreneurs, BUMN employees, students, housewives, lawyers, ojek online drivers, and fresh graduates. Table 5 . Respondent Monthly Income Summary Table 6 . Respondent Monthly Expenses Summary Table 7 . Respondent Monthly Investment Summary Table 9 . Significant Attribute and ConstantBased on the data in Table9, the survey results from 200 respondents can be interpreted as follows:• Constant Ajaib with a t-test value of -2,25 which is more than 1,96 shows a significantly negative, which means that respondents tend not to prefer Ajaib over Stockbit. ISSN: Table 10 . shows the results of elasticity measurements.Table 10 Attribute Elasticity Result
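To make the multinomial logit estimation with Python-Biogeme described in the Methodology more concrete, the following is a minimal, hedged sketch. The data frame, variable names, and coefficient specification are hypothetical (a single generic coefficient per attribute, whereas the study reports alternative-specific effects and 28 scenarios per respondent), and method names may differ slightly between Biogeme versions.

```python
# Minimal MNL sketch with Python-Biogeme (illustrative; not the study's actual specification).
import pandas as pd
import biogeme.database as db
import biogeme.biogeme as bio
from biogeme import models
from biogeme.expressions import Beta, Variable

# Hypothetical stated-preference data: one row per choice task, attributes per alternative.
# Alternatives: 1 = Ajaib, 2 = IPOT, 3 = Stockbit (Stockbit used as the reference here).
df = pd.DataFrame({
    'CHOICE':  [1, 3, 2, 3],
    'PERF_1': [4, 3, 5, 4], 'FEAT_1': [3, 4, 2, 5], 'SEC_1': [5, 3, 4, 4], 'FEE_1': [0.15, 0.10, 0.15, 0.19], 'UI_1': [4, 4, 3, 5],
    'PERF_2': [3, 5, 4, 3], 'FEAT_2': [4, 3, 5, 3], 'SEC_2': [3, 4, 3, 5], 'FEE_2': [0.19, 0.19, 0.10, 0.10], 'UI_2': [3, 5, 4, 4],
    'PERF_3': [5, 4, 3, 5], 'FEAT_3': [5, 5, 4, 4], 'SEC_3': [4, 5, 5, 3], 'FEE_3': [0.10, 0.15, 0.19, 0.15], 'UI_3': [5, 3, 5, 4],
})
database = db.Database('stock_apps', df)

# Alternative-specific constants (Stockbit normalized to zero) and generic attribute coefficients.
ASC_AJAIB = Beta('ASC_AJAIB', 0, None, None, 0)
ASC_IPOT = Beta('ASC_IPOT', 0, None, None, 0)
B_PERF = Beta('B_PERF', 0, None, None, 0)
B_FEAT = Beta('B_FEAT', 0, None, None, 0)
B_SEC = Beta('B_SEC', 0, None, None, 0)
B_FEE = Beta('B_FEE', 0, None, None, 0)
B_UI = Beta('B_UI', 0, None, None, 0)

def utility(suffix, asc):
    """Linear-in-parameters utility for one alternative."""
    return (asc
            + B_PERF * Variable(f'PERF_{suffix}') + B_FEAT * Variable(f'FEAT_{suffix}')
            + B_SEC * Variable(f'SEC_{suffix}') + B_FEE * Variable(f'FEE_{suffix}')
            + B_UI * Variable(f'UI_{suffix}'))

V = {1: utility(1, ASC_AJAIB), 2: utility(2, ASC_IPOT), 3: utility(3, 0)}
av = {1: 1, 2: 1, 3: 1}   # all three applications are available in every scenario

logprob = models.loglogit(V, av, Variable('CHOICE'))
biogeme_model = bio.BIOGEME(database, logprob)
biogeme_model.modelName = 'stock_app_mnl'
results = biogeme_model.estimate()   # with real survey data (200 respondents x 28 scenarios)
print(results.get_estimated_parameters())   # older Biogeme versions: getEstimatedParameters()
```

The estimated coefficients, their t-statistics (significant when |t| > 1.96), and the derived elasticities (elastic when the absolute elasticity exceeds 1) correspond to the quantities reported in Tables 8-10.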
4,041.8
2023-01-30T00:00:00.000
[ "Business", "Economics" ]
Fusion of Membrane Vesicles Bearing Only the Influenza Hemagglutinin with Erythrocytes, Living Cultured Cells, and Liposomes* Membrane vesicles, bearing only the influenza viral hemagglutinin glycoprotein, were reconstituted following solubilization of intact virions with Triton X-100. The viral hemagglutinin glycoprotein was separated from the neuraminidase glycoprotein on an agarose-sulfanilic acid column. The hemagglutinin glycoprotein obtained was homogeneous in gel electrophoresis and devoid of any neuraminidase activity. A quantitative determination revealed that the hemolytic activity of the hemagglutinin vesicles was comparable to that of intact virions. Incubation of fluorescently labeled hemagglutinin vesicles with human erythrocyte ghosts (HEG) or with liposomes composed of phosphatidylcholine/cholesterol or phosphatidylcholine/cholesterol/gangliosides, at pH 5.0 but not at pH 7.4, resulted in fluorescence dequenching. Very little, if any, fluorescence dequenching was observed upon incubation of fluorescently labeled HA vesicles with neuraminidase- or glutaraldehyde-treated HEG or with liposomes composed only of phosphatidylcholine. Hemagglutinin vesicles were rendered non-hemolytic by treatment with NH,OH or glutaraldehyde or by incubation at 85 °C or low pH. No fluorescence dequenching was observed following incubation of non-hemolytic hemagglutinin vesicles with HEG or liposomes. These results clearly suggest that the fluorescence dequenching observed is due to fusion between the hemagglutinin vesicles and the recipient membranes. Incubation of hemagglutinin vesicles with living cultured cells, i.e. mouse lymphoma S-49 cells, at pH 5.0 as well as at pH 7.4, also resulted in fluorescence dequenching. The fluorescence dequenching observed at pH 7.4 was inhibited by lysosomotropic agents (methylamine and ammonium chloride) as well as by EDTA and NaN3, indicating that it is due to fusion of hemagglutinin vesicles taken into the cells by endocytosis. Studies on the biological activity of isolated viral envelope glycoproteins are of crucial importance for the elucidation of the as yet unknown initial steps of virus-membrane fusion, virus penetration, and infection. Reconstituted Sendai virus envelopes have been shown to be as fusogenic as intact virions (1).
Fusion of viral envelopes with cell membranes necessitated the presence of the two * This work was supported by a grant from the National Council for Research and Development, Jerusalem, Israel, and by a grant from the Gesellschaft fur Strahlung Forschung, Munich, Federal Republic of Germany. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "aduertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. viral envelope glycoproteins, namely the hemagglutinin/neuraminidase and the fusion polypeptides (1, 2). In Sendai virus that belongs to the paramyxovirus group, the hemagglutinin binding activity and the neuraminidase are located on the same polypeptide, the hemagglutinin/neuraminidase glycoprotein, thus preventing studies on the function of the neuraminidase itself in the membrane fusion process (2). On the other hand, in influenza that belongs to the orthomyxovirus group, these activities are located on two different glycoproteins, the hemagglutinin and the neuraminidase (3). The hemagglutinin glycoprotein mediates binding to cell surface receptors and is required for promoting virusmembrane fusion (3). However, the question of whether the neuraminidase glycoprotein also participates in the fusion process is still debatable. From studies using reconstituted viral envelopes, Huang et al. (4) have suggested that the neuraminidase glycoprotein is required for allowing virusmembrane fusion. Reconstituted envelopes bearing both glycoproteins, namely the hemagglutinin and neuraminidase, were fusogenic, whereas those bearing only the hemagglutinin glycoprotein were inactive (4). Recently,White et al. ( 5 ) have transformed simian CV-1 cells with plasmids containing the gene for the influenza hemagglutinin glycoprotein. Transformed cells which expressed the hemagglutinin glycoprotein underwent a process of cell-cell fusion at low pH values. These experiments clearly showed that the hemagglutinin glycoprotein by itself is sufficient for promoting membrane fusion. However, the possibility that the process of cell-cell fusion is different from virus-membrane fusion cannot be excluded. Furthermore, such transformed cells may possess a low level of endogenous neuraminidase activity, allowing fusion in the presence of the hemagglutinin glycoprotein only. In the present study, we have prepared membrane vesicles bearing only the influenza virus hemagglutinin glycoprotein. Hemagglutinin vesicles prepared by this method possessed hemolytic as well as fusogenic activity comparable to that expressed by intact virions. Using a fluorescence dequenching method, we have shown that such hemagglutinin vesicles are able to fuse with erythrocyte membranes and living cultured cells, as well as with phospholipid vesicles lacking virus receptors. Virus-Influenza (&P& strain) was isolated from the allantoic fluid of fertilized chicken eggs (6). The viral hemagglutinating units were determined essentially as previously described (7). Cells-Human blood, type 0, Rh', recently outdated, was washed three times in PBS, pH 7.4, and the final pellet obtained was suspended in PBS to give 40% (v/v). The washed erythrocytes were desialized by treatment with neuraminidase, as described before (8). Human erythrocyte ghosts (HEG) were obtained following hemolysis of the human erythrocytes with 40 volumes of 5 mM phosphate buffer, pH 8.0 (9). 
After three washings with the same buffer, the final pellet of white HEG was suspended in PBS, pH 7.4, to give 4 mg of protein/ ml. Mouse lymphoma S-49 cells were grown in DMEM medium + 10% horse serum, as described before (10). Prior to use, the cells were washed twice with DMEM without serum. Preparation of Reconstituted Hemagglutinin Vesicles-Influenza virions (10 mg) were solubilized by 2% (w/v) of Triton X-100 in a final volume of 1 ml of PBS, pH 7.4, as described before for Sendai virions (11). The viral glycoproteins, present in the clear supernatant obtained after removal of the viral nucleocapsid (100,000 X g, 30 rnin), were separated by a column of agarose-sulfanilic acid according to Basch et al. (12), with the following modifications. (a) Sulfanilic acid was coupled to agarose beads, containing covalently attached tyrosine (Sigma), according to Huang (13). Following the coupling process, the agarose beads were introduced into a short column (0.5-1 cm in diameter) to give a packed volume of 3 ml. The column was washed first with buffer A (50 mM Tricine-NaOH, pH 6.8, 20 mM CaCl?, and 2% Triton X-loo), and then with a solution containing 800 pg of phospholipids (PC:cholesterol, 2:1, mol/mol, 2 mg/ml) in the above buffer A. (b) In order to equilibrate with buffer A, the detergent-solubilized viral envelopes were dialized for 3 h at 4 "C against this buffer and then applied to the agarose-sulfanilic acid column. (c) Following introduction of the solubilized viral envelopes, 6 fractions of 2 ml were collected. The column was washed with 10 ml of the same buffer and then with a buffer containing 0.1 M Na2C03/ NaHC03, pH 9.1, and 2% Triton X-100. Fractions of 2 ml were collected and their pH was immediately adjusted to 6.5-7.0 with 0.1 M acetic acid. The detergent (Triton X-100) was removed from the various fractions by direct addition of SM-2 Bio-beads, keeping the ratio of Bio-beads:Triton X-100 (w/w) at 7:1, as described before (11). Briefly, 280 mg of methanol-washed SM-2 Bio-beads were added to each of the three 2-ml fractions. After 2-3 h of incubation at 20 "C with vigorous shaking, an additional portion of 280 mg of SM-2 Biobeads was added. Following an additional incubation period of 12-14 h, the turbid suspension obtained was separated from the Bio-beads and centrifuged (100,000 X g, 30 rnin). The pellet obtained was suspended in 300-500 p1 of PBS, pH 7.4. Electrophoresis analysis revealed that the hemagglutinin glycoprotein was present mainly in the second and third fractions of the flow-through of the first buffer. The neuraminidase glycoprotein was present mainly in the second and third fractions of the second buffer, and it was found to be contaminated with the hemagglutinin glycoprotein. About 10% of the viral protein was recovered in the various fractions following removal of the Triton X-100 by SM-2 Bio-beads. The amount of Triton X-100 remaining in the hemagglutinin vesicles was found to be 0.025-0.033% (w/v) by the use of ["]Triton X-100. Preparation of Fluorescently Labeled, Intact Influenza Virions or Hemagglutinin Vesicles-Intact virions or hemagglutinin vesicles were labeled with Rls, essentially as described before for Sendai virus (14, 15). Briefly, 2-3 p1 of 1.25 mg/ml of ethanolic solution of Rla were rapidly injected into 250 pl of PBS, pH 7.4, containing 400 pg of viral protein. After 15 min of incubation at room temperature in the dark, the viral preparations were washed with 60 volumes of PBS (Eppendorf centrifuge, 15 min). 
Under such conditions, the R18 was inserted into the viral membranes at a self-quenching surface density (about 3 mol % of total viral phospholipids), and a decrease in this surface density was shown to be proportional to the fluorescence dequenching. Fluorescence Measurements-Fluorescent influenza virions or hemagglutinin vesicles (5 µg of each) were incubated with HEG, liposomes, or living cultured cells, in a final volume of 200 µl of PBS, pH 7.4. Following 10 min of incubation at 4 °C, the pH of the medium was adjusted to the desired value by the addition of 50 µl of sodium acetate (0.5 M), and the suspension obtained was then incubated at 37 °C. At the end of the incubation period, a volume of 1 ml of PBS, pH 7.4, was added to the reaction mixture, and the degree of fluorescence (excitation at 560 nm, emission at 590 nm) of each sample was estimated before and after solubilization with 0.1% Triton X-100. The extent of fluorescence obtained in the presence of the detergent was considered to represent 100% dequenching, i.e. infinite dilution of the probe (14, 15). All fluorescence measurements were carried out with a Perkin-Elmer MFP-4 spectrofluorimeter. Virus preparations were also incubated under the same experimental conditions, in the absence of recipient membranes. The extent of fluorescence dequenching was calculated as described before (17). Determination of the Degree of Hemolysis-Various amounts of either intact virions or hemagglutinin vesicles were incubated with 2.5% (v/v) human erythrocytes for 10 min at 4 °C, in a final volume of 800 µl at pH 7.4. At the end of the incubation period, 200 µl of 0.5 M sodium acetate, pH 5.0, were added, and the suspension was further incubated for 15 min at 37 °C. At the end of the incubation period, the degree of hemolysis was estimated at 540 nm, as previously described (7). All the experiments described in the present work have been repeated at least three times. However, the data given represent results from one individual experiment. Quantitative differences between independent experiments never exceeded ±5%. Protein and Lipid Determinations-Protein was determined by the method of Lowry et al. (18) with bovine serum albumin as a standard. Lipid concentration was estimated by the method of Stewart (19) with PC as a standard. RESULTS Characterization and Hemolytic Activity of the Hemagglutinin Vesicles-The gel electrophoresis pattern seen in Fig. 1A shows that the hemagglutinin vesicles obtained by the present method contain only the hemagglutinin glycoprotein. Under reducing conditions, this glycoprotein appears as the hemagglutinin 1 and hemagglutinin 2 polypeptides (Fig. 1A, c).

FIG. 1. Gel electrophoresis analysis and electron microscopic observations of influenza hemagglutinin vesicles. In A, intact influenza virions (a), RIVE (b) (50 µg of protein each), hemagglutinin vesicles (c), and neuraminidase vesicles (d) (15 µg of protein each) were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (15% acrylamide) according to Laemmli (31). RIVE were prepared as described before for the preparation of reconstituted Sendai virus envelopes (11). The following viral polypeptides can be identified: NA, neuraminidase; NP, nucleoprotein; HA1, hemagglutinin 1; HA2, hemagglutinin 2; M, matrix protein. In B, hemagglutinin vesicles were negatively stained with phosphotungstic acid and prepared for electron microscopy as previously described (11). Magnification × 21,800. C, enlargement of the hemagglutinin vesicles seen in the middle of B (× 54,600).
Neither the neuraminidase glycoprotein, which is present in the reconstituted influenza virus envelopes (RIVE) (Fig. 1A, b), nor the viral nucleoprotein or matrix protein polypeptides can be detected in the hemagglutinin fraction. On the other hand, the neuraminidase fraction (Fig. 1A, d) is contaminated by the hemagglutinin glycoprotein. Electron microscopic observations revealed that the hemagglutinin fraction consists of closed membrane vesicles with spikes extending from their surfaces (arrows in Fig. 1B), thus resembling envelopes of intact virions. When identical amounts of viral proteins were studied, it was observed that the hemagglutinin vesicles possessed hemolytic activity of a degree slightly higher than that expressed by intact virions or RIVE (Table I; see also Fig. 2). Addition of a large excess of external phospholipids greatly inhibited the hemagglutinin hemolytic activity, resulting in the formation of non-hemolytic hemagglutinin vesicles at a protein:lipid molar ratio of 1:74 (Table I). Electron microscopic observations revealed that most of the hemagglutinin vesicles formed in the presence of a large excess of phospholipids showed the appearance of viral spikes on their surface (not shown). These preliminary observations confirm previous results (29) showing that, even upon addition of a large excess of phospholipids, all the vesicles formed contained influenza viral glycoproteins. However, the possibility that the hemagglutinin vesicle preparations contained vesicles with a varied lipid:protein ratio cannot be excluded. The hemagglutinin vesicles were practically devoid of any neuraminidase activity (see also Fig. 1A).

TABLE I. Influenza hemagglutinin vesicles: characterization and hemolytic activity. RIVE, hemagglutinin (HA), and neuraminidase (NA) vesicles were prepared and protein:lipid ratios were determined as described under "Materials and Methods" and in the legend to Fig. 1. For determination of the viral neuraminidase activity, 25 µg of the virus preparation were incubated for 30 min at 37 °C with 1 mM N-acetylneuramine lactose at pH 5.6, in a final volume of 200 µl of a buffer containing 50 mM sodium acetate, pH 5.6, 154 mM NaCl, and 6 mM CaCl2. At the end of the incubation period, free sialic acid was determined according to Warren (30). The viral neuraminidase activity is expressed in milliunits/mg of viral protein, where 1 unit of enzyme is the amount of enzyme which hydrolyzes 1 µmol of N-acetylneuramine lactose to N-acetylneuraminic acid in 1 min at 37 °C, pH 5.6. For induction of hemolysis, 1 µg of each viral preparation was incubated with human erythrocytes (2.5%, v/v) as described under "Materials and Methods." External phospholipids (PL) (2 mg/ml of PC:cholesterol, 2:1, mol/mol in 2% Triton X-100) were added to the detergent-solubilized hemagglutinin fractions (2-ml volumes, containing 1-2 mg of hemagglutinin glycoprotein in 2% Triton X-100). All subsequent steps of removal of Triton X-100 by the direct addition of SM-2 Bio-beads were as described under "Materials and Methods." Essentially the same results were obtained from several separate experiments. The data given are results obtained from one specific experiment. In other experiments, some residual activity of neuraminidase could be detected in the hemagglutinin fraction; however, it never exceeded 5 milliunits/mg of viral protein.
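For readers who want to reproduce the unit convention in the Table I legend (1 unit = 1 µmol of N-acetylneuramine lactose hydrolyzed per min at 37 °C, with activity expressed per mg of viral protein), a minimal sketch of the conversion is given below. The amount of released sialic acid is a hypothetical input; the 25 µg of protein and the 30-min incubation are taken from the legend.

```python
def neuraminidase_milliunits_per_mg(sialic_acid_nmol, incubation_min=30.0, protein_mg=0.025):
    """Convert released free sialic acid (nmol, Warren assay) into milliunits/mg of protein.

    1 milliunit = 1 nmol of N-acetylneuramine lactose hydrolyzed per minute.
    Defaults reflect the Table I legend: 25 ug protein, 30 min at 37 C, pH 5.6.
    """
    milliunits = sialic_acid_nmol / incubation_min  # nmol released per min = milliunits
    return milliunits / protein_mg

# Hypothetical example: 3 nmol of sialic acid released corresponds to
# 3 / 30 / 0.025 = 4 milliunits/mg, i.e. below the <5 milliunits/mg residual
# activity mentioned for the hemagglutinin fraction.
print(neuraminidase_milliunits_per_mg(3.0))  # -> 4.0
```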
Conversely, a high degree of neuraminidase activity and a negligible degree of hemolytic activity were detected in the neuraminidase vesicles (Table I). The hemolytic activity of the hemagglutinin vesicles was dependent upon the amount of protein used (Fig. 2, A and C). Similar to the activity of intact virions, that of the hemagglutinin vesicles was expressed only between pH 5.0 and 5.5 (Fig. 2B). Very little hemolysis, if any, was induced by the hemagglutinin vesicles at pH 6.0 and above. Fusion of Hemagglutinin Vesicles with Human Erythrocyte Ghosts and Phospholipid Vesicles-It has been well established that fusion of fluorescently labeled enveloped virions with recipient membranes results in fluorescence dequenching (14, 15). Indeed, the results in Fig. 3A show that incubation of fluorescently labeled hemagglutinin vesicles with human erythrocyte ghosts resulted in fluorescence dequenching, the extent of which was dependent on the amount of the erythrocyte membranes present in the incubation mixture. Very little increase in the fluorescence dequenching was observed upon incubation at pH 7.4, whereas a significant increase was obtained at pH 5.0 (Fig. 3B). The pH profile of the fluorescence dequenching was similar to that observed for the virus and vesicle hemolytic activities (Fig. 2B). A high increase in the degree of fluorescence was obtained between pH 5.0 and 5.5. It was also clear from the results in Fig. 3 (A and B) that the hemagglutinin vesicles behaved exactly as intact virions in their ability to undergo fluorescence dequenching. Support for the view that the fluorescence dequenching observed reflects a process of membrane fusion was obtained from the results summarized in Fig. 4. Very little fluorescence dequenching was obtained upon incubation of hemagglutinin vesicles or intact virions, either at pH 5.0 or pH 7.4, with neuraminidase- or glutaraldehyde-treated HEG. Treatment with neuraminidase removes membrane sialic acid residues, which are known to serve as receptors for influenza virus particles (3). In addition, glutaraldehyde-treated membranes are resistant to virus-membrane fusion but allow lipid-lipid exchange processes (20). As opposed to interaction with biological membranes such as HEG, fusion of enveloped viruses with phospholipid vesicles does not require the presence of specific virus receptors in the lipid bilayer (14, 16, 21). Indeed, similar to intact virions, the hemagglutinin vesicles fused at pH 5.0, but not at pH 7.4, with liposomes composed of PC:cholesterol and lacking virus receptors (Fig. 5). This was inferred from the increase in fluorescence dequenching which was observed upon incubation with the PC:cholesterol liposomes (Fig. 5). The presence of sialoglycolipids (gangliosides) in the liposome bilayer further increased the extent of fluorescence dequenching obtained following incubation with fluorescently labeled hemagglutinin vesicles (Fig. 5). (In the experiments of Fig. 5, all other experimental conditions and the determination of fluorescence dequenching were as described for interactions with HEG in the legend to Fig. 3 and under "Materials and Methods." Chol, cholesterol; Gang, gangliosides.) Fusion of hemagglutinin vesicles, similarly to fusion of intact virions, required the presence of cholesterol in the liposomes. A low extent of fluorescence dequenching was obtained at either pH 5.0 or 7.4 with liposomes composed of only PC (Fig. 5).
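The fluorescence dequenching values discussed above are derived from the probe fluorescence measured before and after incubation and after Triton X-100 solubilization, with the detergent value taken as 100% dequenching (infinite probe dilution). The exact expression of ref. 17 is not reproduced in the text, so the sketch below uses the form commonly applied in R18 dequenching assays; the variable names and the explicit blank correction for vesicles incubated without recipient membranes are my own framing of the procedure described under "Materials and Methods."

```python
def percent_dequenching(f_initial, f_final, f_triton):
    """Percent fluorescence dequenching, with Triton X-100 defining 100% (infinite dilution)."""
    return 100.0 * (f_final - f_initial) / (f_triton - f_initial)

def corrected_dequenching(sample, blank):
    """Subtract the dequenching of vesicles incubated without recipient membranes."""
    return percent_dequenching(*sample) - percent_dequenching(*blank)

# Hypothetical readings (arbitrary fluorescence units): (initial, after incubation, after Triton).
sample = (10.0, 28.0, 100.0)   # labeled vesicles + HEG at pH 5.0
blank = (10.0, 12.0, 100.0)    # labeled vesicles alone under the same conditions
print(corrected_dequenching(sample, blank))  # -> ~17.8 (%)
```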
Treatment of intact influenza virions with hydroxylamine or glutaraldehyde, or incubation at 85 °C or at low pH values, inactivates their hemolytic and fusogenic activities (22, 23). It has been well established that virus-induced hemolysis reflects a process of virus-membrane fusion (24). The same treatments rendered the hemagglutinin vesicles non-hemolytic (Table II), indicating that hemolysis induced by these vesicles is due to the activity of the viral hemagglutinin glycoprotein. These results show that the fluorescence dequenching observed upon incubation with HEG or with liposomes results from the same process, namely fusion between the hemagglutinin vesicles and the recipient membranes. From the results in Table II it is also clear that the non-hemolytic, fluorescently labeled hemagglutinin vesicles, as well as the intact virions, failed to undergo a process of fluorescence dequenching upon incubation with either HEG or with liposomes composed of PC:cholesterol or PC:cholesterol:gangliosides. Fusion of Vesicles with Living Cultured Cells-The results in Fig. 6A show that incubation of fluorescently labeled hemagglutinin vesicles with living cultured cells such as mouse lymphoma S-49 also resulted in fluorescence dequenching. An increase in the degree of fluorescence dequenching was observed following incubation at pH 5.0, as well as at pH 7.4. This is in contrast to the observation with HEG or with liposomes, with which a high degree of fluorescence dequenching was observed only at pH 5.0. Very little fluorescence dequenching was observed upon incubation of non-hemolytic, treated hemagglutinin vesicles with lymphoma cells, indicating that the fluorescence dequenching observed at both pH values is due to fusion of the hemagglutinin vesicles with the recipient cells. The results in Fig. 6B and Table III show that addition of lysosomotropic agents such as methylamine and ammonium chloride (3) greatly reduced the fluorescence dequenching observed upon incubation of hemagglutinin vesicles with lymphoma cells at pH 7.4, but had practically no effect on that observed at pH 5.0. This may indicate that the fluorescence dequenching observed at pH 7.4 is due to fusion with membranes of intracellular organelles such as endosomes or lysosomes, whereas that at pH 5.0 is due to fusion with the cell plasma membrane (3).

TABLE II. Inactivation of the influenza hemagglutinin glycoprotein fusogenic activity. For inactivation of the fusogenic activity of intact influenza virions or hemagglutinin (HA) vesicles, 300 µg of viral protein in 200 µl were treated as follows. For heat and glutaraldehyde inactivation, a virus suspension in PBS, pH 7.4, was incubated at 85 °C for 30 min or in 0.1% glutaraldehyde for 30 min at 37 °C, respectively. Inactivation by low pH was performed essentially as described before (22) by incubating a virus suspension in sodium acetate (0.5 M, pH 5.0) for 30 min at 37 °C. For treatment with NH2OH, a virus suspension in 1 M NH2OH, pH 6.5, was incubated for 30 min at 37 °C, as previously described (23). At the end of the incubation, the virus in the various systems was washed twice with 10 volumes of PBS, pH 7.4, resuspended in 200 µl of PBS, and labeled with R18. Chol, cholesterol; gang, gangliosides.

FIG. 6. In A, fluorescently labeled hemagglutinin vesicles (see Table II) were incubated with mouse lymphoma S-49 cells (5 × 10 6 cells) in a final volume of 200 µl of DMEM, at pH 7.4 (without serum), for 10 min, after which 50 µl of sodium acetate (0.5 M), adjusted to pH 5.0 or pH 7.4, were added. Following 30 min of incubation at 37 °C, the extent of fluorescence dequenching was determined.
In B, hemagglutinin vesicles were incubated with mouse lymphoma S-49 cells before (a) or after (b) the cells were treated with methylamine, at either pH 5.0 or pH 7.4. Mouse lymphoma S-49 cells, suspended in DMEM (without serum) at pH 7.5, were incubated for 30 min at 37 °C with 50 mM methylamine. After two washings with 10 volumes of the same medium, the cells were suspended in the same medium containing 50 mM methylamine to give 2.5 × 10 7 cells/ml. Hemagglutinin vesicles were then incubated with the treated cells, as described for A.

TABLE III. Interaction of hemagglutinin vesicles with lymphoma S-49 cells: effect of lysosomotropic agents and inhibitors of endocytosis. Mouse lymphoma S-49 cells were treated with methylamine (NH2CH3), ammonium chloride (NH4Cl), sodium azide (NaN3) (50 mM each), or EDTA (5 mM) as described for methylamine in the legend to Fig. 6B. All other experimental conditions were as described in the legend to Fig. 6B.

Further support for this view was obtained from the experiments (Table III) showing that incubation of hemagglutinin vesicles with cells incubated with inhibitors of endocytosis, such as EDTA or NaN3 (25), also resulted in a low degree of fluorescence dequenching. DISCUSSION Reconstituted envelopes of fusogenic virions are not only an excellent tool for the elucidation of the molecular mechanism of virus-membrane interaction and fusion but also can be used as a vehicle for microinjection of macromolecules into living cells (1). However, in order to serve as an efficient biological carrier, reconstituted envelopes should possess a high fusogenic activity comparable to that expressed by intact virions. The results of this work clearly show that the hemagglutinin vesicles prepared by the present method are highly hemolytic and fusogenic. Based on quantitative measurements, it appears that the fusogenic activity of the present hemagglutinin vesicles is high, indicating that very little inactivation occurs during their isolation and reconstitution. An apparent 50% reduction in the hemolytic activity of the hemagglutinin vesicles was calculated when their activity was compared to that of the hemagglutinin glycoprotein present in intact virions (Fig. 2C). Our calculations were based on the assumption that the hemagglutinin glycoproteins constitute about 20% of the total viral proteins (32). However, it is noteworthy that the hemagglutinin vesicles were formed by reconstitution and, therefore, it is expected that about 50% of their glycoproteins will be facing the intravesicular space. Taking this into consideration, a 50% reduction in the hemolytic activity is not surprising. From our results, it appears that the hemagglutinin vesicles contained only the hemagglutinin glycoprotein and were practically devoid of the neuraminidase glycoprotein and neuraminidase activity. In our work, we have solubilized and treated influenza virions with Triton X-100 in the same manner as we used this detergent for the preparation of highly fusogenic, reconstituted Sendai virus envelopes (11). Previously (14), it has been shown that treatment of Sendai virions with octyl glucoside caused inactivation of their hemolytic and fusogenic activities. We have found that solubilization of influenza virions with octyl glucoside instead of Triton X-100 caused the complete inactivation of the viral fusogenic activity (not shown).
Electron microscopic and gel electrophoresis studies revealed that the RIVE prepared by the use of octyl glucoside were identical to those obtained using Triton X-100; namely, they contained a high amount of viral spikes and were composed of the hemagglutinin and neuraminidase glycoproteins. The use of Triton X-100 was avoided by other groups because it is difficult to remove by conventional methods due to its low critical micellar concentration (28). However, it seems that, by the method previously described (11) (that is, by the direct addition of SM-2 Bio-beads to the detergent-solubilized virus envelopes), most of the Triton X-100 can be removed. Our results also demonstrated that the hemolytic and fusogenic activities of the hemagglutinin vesicles, similarly to those of intact virions, were expressed only at low pH and were inhibited by conditions which destroy fusogenic activity. This clearly proves that hemolysis and fusion were induced by the viral hemagglutinin glycoprotein and not by the residual amounts of detergent left in this preparation. In a previous attempt to prepare hemagglutinin vesicles (4, 12), the influenza envelope glycoproteins were applied to an agarose-sulfanilic acid column at pH 5.5. It has been well established that incubation of influenza virus glycoproteins at such low pH causes the rapid inactivation of their biological activity (22). To avoid such an inactivation, we have solubilized influenza virions by Triton X-100 in a medium whose pH was kept at 6.8. The separation of the viral glycoproteins by the agarose column was performed at the same pH. Furthermore, in experiments performed in our laboratory, we have shown that the agarose-sulfanilic acid column used for the separation of the influenza hemagglutinin and neuraminidase glycoproteins absorbs a large percentage of the virus envelope phospholipids (not shown). A certain amount of phospholipids is required to allow expression of the fusogenic activity of the viral glycoprotein (29). Therefore, in our experiments, the column was washed with a detergent solution containing lipid molecules, thus saturating it with external lipids. Based on previous observations (14, 15), it should be inferred that the fluorescence dequenching observed reflects a process of virus-membrane fusion. Our results show that incubation of hemagglutinin vesicles with glutaraldehyde-treated erythrocyte ghosts or, conversely, incubation of glutaraldehyde-treated virions with non-treated erythrocyte ghosts resulted in a very low degree of fluorescence dequenching. This further supports the view that the fluorescence dequenching observed is due to membrane fusion and not to lipid-lipid exchange processes. The correlation between the fusion processes and fluorescence dequenching was further emphasized by the observation that treatment of the hemagglutinin vesicles as well as intact virions with hydroxylamine, or preincubation at 85 °C or low pH, significantly reduced the ability of the hemagglutinin vesicles to undergo a process of fluorescence dequenching. All of these treatments have been shown to affect the fusogenic activity of the influenza virions (22, 23). Fusion of the hemagglutinin vesicles with living cultured cells, as opposed to fusion with erythrocyte ghosts or with liposomes, was observed not only at pH 5.0 but also at pH 7.4. It is conceivable that, at physiological pH, the hemagglutinin vesicles, similarly to intact virions, are taken into intracellular organelles such as endosomes or lysosomes by endocytic activity (3).
Subsequent to endocytosis, the hemagglutinin vesicles fuse with membranes of intracellular organelles whose pH was found to be as low as 5.0 (3). This assumption is supported by the results showing that the fluorescence dequenching observed at pH 7.4, but not at pH 5.0, was inhibited by the lysosomotropic agents methylamine or ammonium chloride. Also, EDTA and NaN3, which are known to inhibit endocytosis (25), strongly suppressed the fluorescence dequenching observed at pH 7.4 but not at pH 5.0. Thus, in their fusogenic ability, the hemagglutinin vesicles, prepared by the method described in the present work, behave in the same manner as intact virions, despite the fact that they are devoid of any neuraminidase activity.
6,756.4
1987-10-05T00:00:00.000
[ "Biology", "Chemistry" ]
2D semiconductor nonlinear plasmonic modulators A plasmonic modulator is a device that controls the amplitude or phase of propagating plasmons. In a pure plasmonic modulator, the presence or absence of a plasmonic pump wave controls the amplitude of a plasmonic probe wave through a channel. This control has to be mediated by an interaction between disparate plasmonic waves, typically requiring the integration of a nonlinear material. In this work, we demonstrate a 2D semiconductor nonlinear plasmonic modulator based on a WSe2 monolayer integrated on top of a lithographically defined metallic waveguide. We utilize the strong interaction between the surface plasmon polaritons (SPPs) and excitons in the WSe2 to give a 73% change in transmission through the device. We demonstrate control of the propagating SPPs using both optical and SPP pumps, realizing a 2D semiconductor nonlinear plasmonic modulator with an ultrafast response time of 290 fs. Plasmonic modulators have been highly sought after for approaches to optical frequency information processing devices [1][2][3][4][5]. Optical frequency plasmonic devices offer potential advantages over electronic devices due to the high carrier frequency of optical waves, as well as the potential to use ultrafast solid-state nonlinearities for sub-picosecond switching times. Furthermore, by using plasmonic structures, optical frequency waves can be confined to sub-free-space wavelength waveguides, allowing for miniaturization of on-chip optical devices 1 . Early plasmonic modulators demonstrated modulation using quantum dots 3 and photochromic molecules 6 as integrated nonlinear materials, but were limited by slow (>40 ns and 10 s) switching times 3,6 . Ultrafast (~200 fs) plasmonic modulation was demonstrated with a modulation depth of 7.5% (~0.3 dB) by direct optical pumping of bare metallic plasmonic waveguides 5 , but required the use of high ~90 nJ pump pulse energy. The current state-of-the-art plasmonic modulators based on traditional bulk nonlinear materials and nanoplasmonic resonator/interferometer structures typically can achieve modulation depths on the order of 1-10 dB μm −1 with response times >2 ps and require ~3-20 pJ of pulse energy to operate [7][8][9]. Recently, graphene-based all-optical 10 and plasmonic 11,12 modulators have been studied, and compared favorably to previous modulators in terms of switching speed and energy 11 . In 2018, a graphene-based plasmonic modulator achieved 0.2 dB μm −1 modulation depth with a switching energy of 155 fJ and a response time of 2.2 ps using a deep subwavelength plasmonic waveguide 12 . This progress motivates the investigation of other atomically thin materials in plasmonic modulator structures that have the potential to achieve faster response times, lower switching energies, and larger modulation depths. In free space optical measurements, monolayer WSe 2 and other semiconducting TMDs are known to exhibit large light-matter interactions and large third-order nonlinear optical susceptibilities near their exciton resonance [13][14][15][16][17][18][19]. Recently, there has been significant interest in using monolayer TMDs for plasmonic applications, including the demonstration of SPPs coupling to dark excitons in monolayer WSe 2 20,21 , increasing the nonlinear response using localized plasmonic effects [22][23][24], and enhancement of single quantum emitter emission rates 25,26 . In this work, we investigate the 2D semiconductor-plasmonic structure as depicted in Fig.
1a in order to understand the fundamental exciton-SPP interactions and to demonstrate their promise for active plasmonic devices. Our results rely on the atomically thin nature of the TMD and the surface-confined SPP mode to realize an attractive geometry where the active layer is near the maximum amplitude of the SPP mode. Furthermore, we develop a novel, self-consistent theory of exciton-SPP (E-SPP) coupling that is unique to the 2D layer geometry and includes a complete E-SPP dispersion relation for arbitrary distances between the metal surface and the TMD layer. We show that our E-SPP model is highly predictive for both the linear (transmission) and nonlinear (differential transmission) response. We take advantage of the fast nonlinear optical response of 2D semiconductor excitons to realize an ultra-low switching energy plasmonic modulator with a modulation depth of at least 4.1% in continuous wave (CW) measurements, limited by the pump power used in the measurement. Our time-domain measurements reveal a fast (slow) component of the nonlinear response with a response time of 290 fs (13.7 ps). Results Fabrication of E-SPP device and linear response. Monolayer WSe 2 was integrated on top of the metallic waveguide structures to serve as a nonlinear active layer. Monolayer WSe 2 was isolated through mechanical exfoliation from high quality bulk crystals. The WSe 2 thickness was confirmed by photoluminescence. In order to electrically isolate the WSe 2 from the metallic waveguide (to avoid quenching of excitons), it was encapsulated with hBN. The hBN-WSe 2 -hBN heterostructures were fabricated and transferred onto the waveguides using a polymer-based dry transfer technique (polycarbonate film on polydimethylsiloxane, PDMS, stamp) 27 . The transfer was performed under a microscope-based probe station to allow for alignment of the 2D heterostructure and waveguide. All of the CW measurements in the main text were performed on the same sample with a 5-nm-thick top hBN layer. The hybrid hBN-WSe 2 -hBN/plasmonic structures were measured at 4.5 K (CW measurements) and 11 K (time-domain measurements) in a closed-cycle optical cryostat to reduce thermal broadening effects. The transmission spectra and CW nonlinear measurements were measured using two tunable Ti:sapphire continuous wave lasers (M Squared SolsTiS). The laser was focused to a diffraction-limited spot on the input grating coupler. Light scattered from the output grating coupler was isolated using a spatial filter and measured with a silicon photodiode. In the linear transmission measurements, the probe laser was modulated for lock-in detection. In the nonlinear spectroscopy measurements, pump and probe beams were amplitude modulated at different frequencies near 500 kHz to allow for lock-in detection at the modulation difference frequency. The time-domain pump-probe measurements were performed with a tunable mode-locked Ti:sapphire laser with a repetition rate of 76 MHz and pulse width of ~120 fs. The SPP transmission spectrum is shown in Fig. 1c for 60 µW input power. The black data show the transmission spectrum for the hybrid hBN-WSe 2 -hBN/plasmonic structure (with a ~4-µm-long WSe 2 layer), and the red data show a reference bare waveguide. At the exciton resonance (1.737 eV, 713.6 nm), the transmission is reduced by ~73% due to the presence of the WSe 2 layer, indicating a large interaction between SPPs and WSe 2 excitons. By comparing these transmission data to the photoluminescence spectrum (Fig.
1c inset), we can identify the dip in the SPP transmission as originating from the WSe 2 neutral exciton (X 0 ) 14 . We note that the center energies of the PL and SPP absorption response are aligned to within 1 meV, consistent with previous optical measurements on monolayer WSe 2 14 . Theory of E-SPP. In order to understand the coupling between SPPs and excitons, we use an extension of the well-known SPP dispersion kx(ω), which relates the wave vector component kx (the axis x is shown in Fig. 1a) of a mode propagating along the surface to its energy ħω, where ω is the SPP's angular frequency. Our approach complements other theories such as coupled oscillator (plexciton) 28,29 , scattering 30 , and gain-assisted SPP theories 4,31,32 . It is formulated for arbitrary distances between the metal surface and the TMD layer, and reduces to an expression calculated previously 33,34 in the limit of vanishing distance. Using the dielectric function of the metal εm(ω) and the optical susceptibility χ(ω) of the TMD layer, we obtain their coupling directly by solving the dispersion relation, which is free of any fitting parameters. We use subscripts 1, 2, and 3 to denote the region above the TMD layer, between the metal surface and the TMD layer, and inside the metal, respectively (Supplementary Fig. 1). The dispersion relation of the coupled exciton surface plasmon polariton is obtained in a way that is analogous to deriving that of an SPP, i.e., looking for non-trivial solutions of Maxwell's equations that satisfy the continuity relations at the surface and decay away from it. The difference is the presence of the TMD layer, which requires additional continuity relations to be fulfilled, and there is no exponential decay in the region between the metal surface and the TMD layer. The coupling strength between the surface plasmon and the exciton in the TMD layer is governed by the factor exp(−Im(k2z) z′), where k2z is the wave vector component normal to the surface in region 2 and z′ is the distance between the metal surface and the TMD layer. As expected, the coupling is strong only if the layer is within the region of the evanescent surface mode. In the limit z′ → 0, we obtain a closed-form dispersion relation in which kz is the wave vector component perpendicular to the surface, the dielectric functions are ε1 = ε2 = 1 for the vacuum regions and ε3 = εm, and g = 4πi(ω²/c² − kx²)χ(ω) provides the coupling, in agreement with previous works 33,34 . We use a Drude model for the metal and a Lorentz model for the TMD exciton. This results in an E-SPP resonance at the exciton energy (1.737 eV), with the real and imaginary parts of kx(ω) shown as the gray and blue curves in Fig. 2a. The E-SPP group velocity is plotted in Supplementary Fig. 2. The peak value of Im(kx) of 0.07 µm −1 corresponds to an absorption length of 7.1 µm. Figure 2b shows the measured transmission as a function of the effective WSe 2 sample length for the three different structures we investigated (Supplementary Fig. 3a-f). The effective WSe 2 sample lengths were estimated from the optical microscope images by calculating an average length over the central 3 µm of the waveguide, corresponding to the full-width half-max of the SPP spatial mode. An exponential fit to these data yields an effective decay length of 4.8 ± 0.6 µm, which is in good agreement (within a factor of two) with our theoretical model.
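The connection between the imaginary part of the E-SPP wave vector and the measured transmission can be reproduced with a short calculation. The sketch below is a minimal illustration, not the authors' full coupled dispersion relation: it only converts an Im(kx) value into an intensity absorption length, 1/(2 Im(kx)), and an SPP power transmission through a WSe2-covered length L, T(L) = exp(−2 Im(kx) L); the function names are my own.

```python
import math

def absorption_length_um(im_kx_per_um):
    """Intensity (power) 1/e absorption length for an SPP with Im(kx) given in 1/um."""
    return 1.0 / (2.0 * im_kx_per_um)

def spp_transmission(length_um, im_kx_per_um):
    """Fraction of SPP power transmitted through a WSe2-covered region of given length."""
    return math.exp(-2.0 * im_kx_per_um * length_um)

# Theoretical peak Im(kx) = 0.07 um^-1 gives the ~7.1 um absorption length quoted above.
print(absorption_length_um(0.07))   # -> ~7.14 um

# Conversely, the measured effective decay length of 4.8 um corresponds to
# Im(kx) ~ 0.10 um^-1, within a factor of two of the theory, as stated in the text.
print(1.0 / (2.0 * 4.8))            # -> ~0.104 um^-1

# Transmission through a ~4-um-long WSe2 layer at the theoretical damping:
print(spp_transmission(4.0, 0.07))  # -> ~0.57
```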
Nonlinear E-SPP interactions. The transmission of the plasmonic device can be controlled by optically pumping the WSe 2 excitons, partially saturating the absorption. Figure 3a depicts the experimental configuration where SPPs propagating through the plasmonic structure serve as the probe, and a free-space laser focused on the hBN-WSe 2 -hBN structure serves as an optical pump. Here, the focused optical pump beam diameter was chosen so that it illuminated nearly the entire WSe 2 region with an intensity of 8.5 × 10 6 W m −2 . Figure 3b shows the DT/T spectrum as a function of probe wavelength, i.e., the pump-induced differential transmission (DT) normalized by the probe transmission (T). DT/T spectra for three different pump energies of 1.717 eV (red), 1.739 eV (black), and 1.746 eV (blue) are shown. When the pump laser is near resonance with the WSe 2 X 0 , the DT/T signal is maximized, giving a peak value of DT/T = 4.1 × 10 −3 . Figure 3c shows the pump power dependence of the DT signal near the center of the exciton peak (1.739 eV pump, 1.743 eV probe). The DT signal is linear with pump power, indicating that the DT response arises from the third-order nonlinear susceptibility. In order to demonstrate plasmonic modulation, we performed nonlinear measurements where both pump and probe lasers were coupled into the input grating, launching pump and probe SPPs (depicted in Fig. 4a). Figure 4b shows the CW DT/T spectra for three different pump-SPP energies of 1.715 eV (red), 1.739 eV (black), and 1.771 eV (blue). We again observe a strong nonlinear response at the X 0 resonance, corresponding to a maximum DT/T = 4.1 × 10 −2 . For this case, we see the DT/T amplitude increases by a factor of 10 over the optically pumped signal. The dependence on the SPP pump power is also linear (Fig. 4c), consistent with a third-order nonlinear response. From the finite difference time-domain (FDTD) model, we find that the SPP pump intensity is 4.6 × 10 6 W m −2 , ~2 times smaller than the optical pump case. In order to quantify the response time of the system, we carried out resonant time-domain pump-probe measurements. We used a single (~120 fs) pulsed laser tuned to resonance (1.736 eV, 714 nm) split into pump and probe, and a mechanical delay line to vary the pump-probe temporal separation. The time-dependent DT/T response is shown in Fig. 4d. A biexponential is fit to the positive-time signal, resulting in a fast (290 ± 20 fs) and a slower (13.7 ± 0.6 ps) component of the decay. We note that these decay times are 5-10 times faster than previously reported for monolayer WSe 2 on SiO 2 35,36 , which is not surprising due to the coupling to the SPP mode. Figure 4e shows the DT response as a function of pump pulse energy (of the SPP). We note that the required pump pulse energy to achieve a DT/T ≈ 1% is 650 fJ. We also note that in SPP pump-SPP probe measurements both pump and probe lasers were detected simultaneously, and both contribute to the DT signal. To account for this, the transmission used to calculate DT/T is the sum of both pump and probe beams combined. To understand this plasmonic modulation effect and to estimate the order of magnitude of the third-order nonlinearity, we extended our linear analysis (Fig. 2a) to the third-order nonlinear response with perturbation theory. Since we only observe significant signal near 1.737 eV, we limit our model to in-plane dipoles associated with the X 0 excitons. This reduces the susceptibility tensor to the single third-order component χ (3) (Supplementary Note 1).
We assume that the pump-induced change in the susceptibility (Δχ) is proportional to the average pump intensity (Ip), i.e., Δχ(ω, Ip) ≈ (4π/c) Ip χ (3) (ω). We can then use the linear dispersion relation with the replacement χ(ω) → χ(ω, Ip) = χ(ω) + Δχ(ω, Ip), which yields a pump-induced change in the dispersion, Δkx, and thus a measure of the pump-induced differential transmission DT/T. We deduce a value for Im χ (3) at the peak of the E-SPP resonance by using the experimental value for DT/T and the estimated average intensity. For both optical pump/SPP probe and SPP pump/SPP probe, we find the order of magnitude of Im χ (3) to be −10 −20 m 3 V −2 , in agreement with previously reported all-optical experiments 16,35 . We note this value is a 2D third-order susceptibility. To compare this value to a hypothetical 3D susceptibility, one must divide it by the monolayer thickness. Discussion In this work, we have investigated both the linear and nonlinear response of excitons interacting with propagating SPPs in metallic waveguide structures. We show that the linear absorption of SPPs can be very large, exceeding 73%. The large absorption and nonlinear response might be surprising considering that the out-of-plane spatial extent of an SPP (~400 nm) is much larger than the TMD thickness (0.7 nm). However, our theoretical analysis is consistent with the measurements, yielding an absorption coefficient on the order of 0.2 µm −1 . The key to the large linear absorption is the nanometer-scale proximity of the TMD layer to the metal surface, which allows for the active layer to be located near the maximum of the SPP mode. By performing both optical pump and SPP pump DT measurements, we demonstrate control of SPP propagation with a DT/T response exceeding 4%. The modulation depth per unit length achieved in our modulator (0.04 dB µm −1 ) is within an order of magnitude of other state-of-the-art plasmonic modulators based on monolayer graphene 12 . We note that in both optical and SPP pumped measurements, the maximum pump powers we used were conservatively chosen to avoid sample damage. Since the DT signals are linear in pump intensity up to the highest pump powers used (Figs. 3c and 4c), the reported modulation depths should be taken as lower bounds on the achievable modulation depth. In principle, the modulation depth could be further enhanced by using longer TMD layers, stacking several TMD layers separated by hBN, or by decreasing the SPP mode size by depositing a high dielectric constant material on top of the structure. The modulation depth can also be increased by utilizing an interferometric or slot waveguide modulator structure 11,12 . Furthermore, our theory predicts that the modulation depth increases with decreasing detuning between the exciton and the SPP resonance. This plasmonic enhancement follows from the equation Δkx = henh(ω)Δχ(ω), in which the nonlinear change in the complex-valued propagation vector, Δkx, which governs directly measurable quantities such as DT/T ∝ Im(Δkx), is related to the change in the susceptibility Δχ(ω). The plasmonic enhancement factor henh(ω) is shown in Supplementary Fig. 4. Our model shows that increasing the exciton energy to 2.8 eV (with all other parameters unchanged) would increase Δkx by more than 2 orders of magnitude. To further quantify the performance of our modulator relative to previous works, we consider the response time and minimum energy needed to switch the modulator.
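As a quick way to connect the time-domain numbers quoted above with the bandwidth figure discussed next, the sketch below models the recovery as the biexponential used in the fit (290 fs and 13.7 ps components) and converts the fast component into an approximate modulation bandwidth, assuming a Gaussian response with a time-bandwidth product of ~0.44. The amplitude split between the two components is a placeholder, not a fitted value from the paper.

```python
import math

TAU_FAST_S = 290e-15   # fast decay component from the pump-probe fit
TAU_SLOW_S = 13.7e-12  # slow decay component from the pump-probe fit

def dt_over_t(t_s, a_fast=0.7, a_slow=0.3):
    """Biexponential recovery of the normalized differential transmission (t >= 0).

    The 0.7/0.3 amplitude split is illustrative only.
    """
    return a_fast * math.exp(-t_s / TAU_FAST_S) + a_slow * math.exp(-t_s / TAU_SLOW_S)

def gaussian_bandwidth_hz(response_time_s, tbp=0.441):
    """Approximate modulation bandwidth for a Gaussian response of the given duration."""
    return tbp / response_time_s

# The 290-fs fast component corresponds to roughly 1.5 THz, as quoted in the text.
print(gaussian_bandwidth_hz(TAU_FAST_S) / 1e12)  # -> ~1.52 (THz)
print(dt_over_t(1e-12))                          # signal remaining 1 ps after the pump
```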
Indeed, the fast component of our response (290 fs) is comparable to the fastest previously reported plasmonic modulators 5 , but we achieve a similar modulation depth with a ~10 5 times lower pump pulse energy. Compared to other state-of-the-art plasmonic modulators, which typically require ~3-20 pJ of pulse energy to operate, with response times >2 ps 7-9 , our demonstration of a 290-fs response time with 650 fJ pump pulse energy compares favorably. We note that, assuming a Gaussian pulse, this response time corresponds to a modulation bandwidth of ~1.5 THz. We believe that future 2D semiconductor-plasmonic structures based on our reported nonlinear exciton-SPP plasmonic modulation effect could pave the way towards ultrafast plasmonic amplifiers and transistors with ultra-low switching energies. Methods Fabrication. The gold waveguide was fabricated on 285 nm SiO 2 /Si substrates by a two-step electron beam lithography process using an electron beam lithography system (100 kV Ellionix) and a spin-coated poly(methyl methacrylate), PMMA, resist. In the first step, 200 nm gold was (electron beam) evaporated onto the substrate using a 10 nm titanium sticking layer. In the second lithography step, PMMA was respun and the grating pattern was written and developed. We used an Ar + milling process to etch the grating couplers into the waveguide. The waveguides are 5 µm × 13 µm. The grating couplers are composed of five grooves that are 40 nm deep with a width of 110 nm and period of 570 nm. The bare waveguides were characterized using atomic force microscopy (Supplementary Fig. 5a-d) and optical spectroscopy (Fig. 1c). The waveguide and grating coupler designs were optimized using an FDTD model. Simulations of the bare metallic structure show a maximum transmission of ~4% at the exciton resonance (Supplementary Fig. 6). We integrate a hexagonal boron nitride (hBN) encapsulated monolayer transition metal dichalcogenide (TMD) semiconductor, WSe 2 , on top of the waveguide (see Fig. 1a), where the interaction between SPPs and excitons in the WSe 2 provides the nonlinear response needed for modulation. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
4,508
2019-02-12T00:00:00.000
[ "Physics" ]
State-of-the-Art Review of Capabilities and Limitations of Polymer and Glass Fibers Used for Fiber-Reinforced Concrete The concrete industry has long been adding discrete fibers to cementitious materials to compensate for their (relatively) low tensile strengths and control possible cracks. Extensive past studies have identified effective strategies to mix and utilize the discrete fibers, but as the fiber material properties advance, so do the properties of the cementitious composites made with them. Thus, it is critical to have a state-of-the-art understanding of not only the effects of individual fiber types on various properties of concrete, but also how those properties are influenced by changing the fiber type. For this purpose, the current study provides a detailed review of the relevant literature pertaining to different fiber types considered for fiber-reinforced concrete (FRC) applications with a focus on their capabilities, limitations, common uses, and most recent advances. To achieve this goal, the main fiber properties that are influential on the characteristics of cementitious composites in the fresh and hardened states are first investigated. The study is then extended to the stability of the identified fibers in alkaline environments and how they bond with cementitious matrices. The effects of fiber type on the workability, pre- and post-peak mechanical properties, shrinkage, and extreme temperature resistance of the FRC are explored as well. In offering holistic comparisons, the outcome of this study provides a comprehensive guide to properly choose and utilize the benefits of fibers in concrete, facilitating an informed design of various FRC products. Introduction Conventional concrete is a brittle material by nature, with a (relatively) weak performance in tension. To alter this characteristic and avoid a sudden brittle failure of concrete structures, reinforcing materials are embedded into the concrete matrix. Since ancient times, people have been putting fibers, such as straws and hairs, in mortars and bricks to improve their tensile properties. These ancient and simple methods of concrete reinforcement have now been transformed into the advanced methods that involve using discontinuous fibers distributed randomly throughout the concrete matrix [1]. The resulting composite material is called fiber-reinforced cementitious composite, even though there are other names for concretes, mortars, or pastes that include fibers within their matrices. The current review study focuses mainly on the cementitious composites that include coarse aggregates, which are commonly referred to as fiber-reinforced concrete (FRC). The main fiber properties that govern the performance of FRC in the fresh and hardened states include fiber dimensions, elastic modulus, tensile strength, ultimate strain, and bonding and chemical compatibility with the matrix. Considering different fiber materials used in the current practice, the four main categories of metallic, glass, polymer, and natural fibers can be identified. As the name implies, metallic fibers refer to the fibers that are made from metals. The most common type of metallic fibers is steel fibers, but stainless-steel fibers have recently gained a growing attention because of high corrosion resistance. Glass fibers are broadly defined as the fibers that are derived from naturally occurring minerals or rocks. The two general types of glass materials used as fiber reinforcement in cementitious matrices are silica and basalt glass. 
Polymer fibers are considered as manmade fibers that are neither metallic nor glass fibers. A wide variety of synthetic polymer fibers are deemed suitable for application in FRC, including but not limited to polypropylene (PP), nylon, polyvinyl alcohol (PVA), polyolefin (PO), carbon, polyethylene (PE), polyester (PET), acrylic (PAN), and aramid. Natural fibers are the fibers that occur in nature within the organic tissue of plants. In this study, the main properties of FRC with various polymer and glass fibers are explored in detail, while an investigation of the effects of metallic and natural fibers is deferred to a separate study. FRC with Micro and Macrofibers The inclusion of fibers is known to affect various fresh and hardened properties of concrete, while the primary goal of including fibers in concrete is often to prevent (or control) the propagation of cracks. Cracking in concrete is a multiscale process, which starts at the micro-scale when cracks begin to form under applied stresses in the interfacial transition zones (ITZs) between the cement paste and aggregates. Such microcracks spread through the paste until they meet other microcracks, and eventually, grow into a large macrocrack [2]. Once a macrocrack has formed and the crack has widened past the stage of aggregate interlock, the concrete is left with close to no effective tensile load bearing capacity. This mechanism can form anywhere that a high tensile stress is present in the concrete with no reinforcement to prevent the tensile failure as the crack widens. Because of the multiscale progression of cracks in concrete, different sizes of fibers dispersed throughout the concrete matrix are deemed beneficial at different stages of crack growth. It is important to note that if coarse aggregates are taken out of FRC, the fresh and hardened properties of the composite change drastically due to the increased homogeneity of the matrix. The resulting fiber-reinforced mortars are often referred to as high-performance fiber-reinforced cementitious composites and the following micro/macrofiber discussions do not necessarily apply to them. There are a range of crack-inducing phenomena in concrete, including early-age plastic shrinkage cracks, which can be reduced (or even eliminated) by the inclusion of fibers at low dosages. The inclusion of fibers can also be beneficial for controlling the cracks after hardening. This is highly dependent on not only the fiber type and dosage, but also the dimensions of the fibers added to the cementitious matrix. Microfibers are low diameter, high aspect (length/diameter) ratio fibers that are most often less than 18 mm in length. Microfibers can be effective at arresting microcracks as they extend beyond the ITZ and propagate through the cement paste. This is primarily achieved by bridging the microcracks, facilitating the transfer of tensile stresses. Due to the micro-scale reinforcement action provided by microfibers, it is generally accepted that they most significantly affect the strength properties of FRC prior to full crack formation, as characterized by the stress-strain curve prior to the peak stress. Ultimate strength gains in both compression and tension have been reported for FRC with microfiber [3][4][5], depending on the fiber and matrix properties. Considering that concrete strength increases over time, even low-modulus microfibers can be effective at increasing the strength at an early age. 
For mature, and especially high-strength concrete, however, pre-crack strength increases are often obtained by using high-strength and high-modulus microfibers. Microfibers generally tend to have a more profound effect on the workability of concrete compared to macrofibers at equal volumetric dosages. This can be associated with the high surface area per unit volume that microfibers typically offer. In order to maintain workability, a sufficient paste volume is needed within the system to coat the additional surface area of the fibers. Alternatively, high dosages of superplasticizers can be utilized. As reflected in the literature, increasing the aspect ratio of the mixed fibers decreases the workability of concrete [6][7][8]. Macrofibers are characterized by their increased lengths (and reduced aspect ratios) compared to microfibers. There is no set standard to define the size boundaries between micro and macrofibers, which creates some overlap in the definitions. However, macrofibers are rarely shorter than 18 mm and generally have diameters larger than 0.1 mm. Macrofibers are effective at bridging the cracks in concrete once they have grown past the micro stage. This is because macrofibers are long enough to provide stress transfer across crack openings when a single crack has formed from the growth of microcracks. If the fiber-matrix bond condition is kept unchanged, the higher the elastic modulus of the macrofiber, the smaller the crack width under the same applied load. This feature relies on the existence of sufficient bond between the fibers and the matrix to develop the strength of the individual fibers and utilize their high stiffness. Besides reported exceptions for the fibers with a high modulus of elasticity [3,9], macrofibers do not significantly influence the strength parameters of concrete prior to crack formation. The effectiveness of macrofibers at bridging cracks depends on the maximum aggregate size as well. In general, for larger aggregates, longer fibers are more effective at improving the post-crack performance, while for smaller aggregates, shorter fibers can be equally (if not more) effective [10]. Due to the fact that microfibers are most effective at improving performance parameters at the early age (for example, by reducing plastic shrinkage cracks) and macrofibers are most effective for post-crack ductility and macrocrack control in FRC, the proper choice of fiber geometry is of the utmost importance for achieving the expected performance, depending on the target application. Fiber Material Types In this study, the properties of FRC containing different polymer and glass fibers are investigated with necessary details and comparisons. Tables 1 and 2 summarize the main characteristics of various polymer and glass fibers, respectively. Many types of polymer fibers have been used in FRC products with various outcomes, mainly due to the diversity of the chemical, physical, and dimensional properties of this category of fibers. The polymer fibers reviewed in this study consist of polypropylene (PP), nylon, polyvinyl alcohol (PVA), polyolefin (PO), carbon, polyethylene (PE), polyester (PET), acrylic (PAN), and aramid. On the other hand, the reviewed glass fibers primarily cover silica and basalt glass fibers. The listed fibers have been subject to adequate research and investigation to warrant their inclusion in the current study. 
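Because the workability penalty described earlier scales with the fiber surface area that the cement paste must coat, it is useful to see how strongly that area depends on fiber diameter before the individual fiber materials are discussed. The sketch below compares the total fiber surface area per cubic meter of concrete for an illustrative microfiber and macrofiber at the same volumetric dosage; the specific diameters, lengths, and the 0.5% dosage are assumed values chosen for illustration, not data from the studies cited above.

```python
import math

def fiber_surface_area_per_m3(diameter_m, length_m, volume_fraction):
    """Total surface area (m^2) of straight cylindrical fibers per m^3 of concrete."""
    single_volume = math.pi * diameter_m**2 / 4.0 * length_m
    single_area = math.pi * diameter_m * length_m + 2.0 * math.pi * diameter_m**2 / 4.0
    n_fibers = volume_fraction / single_volume   # number of fibers per m^3 of concrete
    return n_fibers * single_area                # ~ 4 * Vf / d for slender fibers

dosage = 0.005  # 0.5% of the concrete volume, an illustrative dosage

micro = fiber_surface_area_per_m3(diameter_m=20e-6, length_m=12e-3, volume_fraction=dosage)
macro = fiber_surface_area_per_m3(diameter_m=0.8e-3, length_m=50e-3, volume_fraction=dosage)

# ~1000 m^2 vs ~25 m^2 of fiber surface per m^3 of concrete: at equal dosage the
# microfiber demands far more paste (or superplasticizer) to maintain workability.
print(round(micro), round(macro))
```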
Polypropylene (PP), with a chemical formula of (C 3 H 6 ) n [11], is a common type of concrete fiber, owing to its chemical stability in the alkalinity of concrete, wide availability, and low cost. The characteristics and behavior of PP fibers have been extensively explored in the literature [12][13][14][15]. In contrast to steel fibers, PP fibers have a relatively low tensile strength and modulus of elasticity, as listed in Table 1. Although new types of high-tenacity PP fibers have been developed with much higher strength and elastic modulus compared to traditional PP fibers, they are still low in strength and elastic modulus compared to other high-strength concrete fibers. Despite a low strength, the PP fiber is a highly ductile fiber, therefore, it can increase the toughness and impact resistance of concrete, especially at high strains. The PP fiber is one of the most cost-effective concrete fibers available from almost all concrete fiber suppliers. Nylon Fibers Nylon is a synthetic fiber with a chemical formula of (C 12 H 22 N 2 O 2 ) n that can have a range of strength properties dependent on the base polymer, manufacturing technique, and additives used to make it [16]. Although chemically different, nylon and PP fibers often deliver similar benefits when used in FRC because, in general, they have similar fibermatrix bond strengths, tensile strengths, and elastic moduli. Nylon fibers are, however, more expensive than PP fibers. Recent interest in recycled nylon fibers is expected to help decrease the cost of this type of fiber. Nylon fibers can be readily obtained from most concrete fiber suppliers. Polyvinyl Alcohol Fibers Polyvinyl alcohol (PVA), with a chemical formula of (C 2 H 4 O) n , is a relatively highstrength synthetic fiber originally developed for the replacement of asbestos in asbestos cement [17,18]. The use of PVA fibers has been expanded to FRC applications, owing to their satisfactory mechanical properties and ability to bond chemically with the cement matrix. However, PVA fibers are less common in practice since they are more expensive than most other concrete fibers. Although they are not widely available in the concrete fiber market, they can still be purchased from select suppliers. Polyolefin Fibers Polyolefin (PO) is a polymer fiber formed by the polymerization of olefin monomer units (C n H 2n ) that encompass polypropylene and polyethylene as subgroups [19]. For the purpose of this study, PO fibers are discussed separately due to their distinctions from other polymeric fibers in the FRC literature. PO concrete fibers share similar properties with high-tenacity PP fibers, as shown in Table 1. Because of the similarities between PO and PP fibers, characterized by low tensile strength, low elastic modulus, and high ultimate strain, the performance of FRC products made with these two fibers tends to be similar. It is also common for blended PP/PO copolymer resins to be manufactured into concrete macrofibers. PO fibers are provided by most concrete fiber suppliers, and they have a relatively low price. Carbon Fibers Carbon fiber has historically been one of the most popular types of fiber for reinforcing brittle matrix composites to improve their tensile properties [20]. The effectiveness of carbon fiber reinforcement in other types of matrices has sparked interest in using carbon fibers in FRC. Carbon fibers can have a wide range of mechanical properties, depending on the materials used to make the fibers. 
For example, polyacrylonitrile (PAN)-based carbon fibers have a very high tensile strength and elastic modulus (up to the twice of those of steel fibers), while pitch carbon fibers that are made from petroleum and coal tar pitch have a relatively low tensile strength and elastic modulus. Pitch carbon fibers often exhibit a wide range of tensile strengths and elastic moduli, depending on the nature of the pitch used to make them [8]. The properties of the fibers can vary considerably depending on the manufacturing process as well. Both types of carbon fiber are made from varying degrees of heat treatment, stretching, and oxidation [18]. Carbon fibers are expensive compared to other fiber choices, which limit their use in civil infrastructure applications. Polyethylene Fibers Polyethylene (PE) fiber, with a chemical formula of (C 2 H 4 ) n , can be produced with a wide range of mechanical properties. In the past, PE fibers were being characterized by low strength and elastic modulus, similar to PP and PO fibers. However, the development of ultra-high density PE has greatly increased the strength and stiffness of this type of fiber. From the performance perspective, it can be generally stated that the higher the fiber density and molecular weight, the higher the strength and stiffness potential. These fiber properties depend on the degree of molecular alignment achieved by advanced production processes, involving heat pressure and catalysts [21]. High-strength polyethylene (HSPE) is a type of PE fiber made from gel-spinning ultra-high molecular weight PE. The tensile strength and modulus of elasticity of HSPE are higher than those of other polymeric fibers, as shown in Table 1. HSPE is a high-performance product, thus, it is expensive to buy directly from the manufacturer. However, waste HSPE fibers can be obtained from third-party distributors for a low price. Polyester Fibers Polyester fibers generally fall under two categories, i.e., polyethylene terephthalate (PET) and poly(1,4-cyclohexylene dimethylene terephthalate) (PCDT). The PET and PCDT fibers are made using different processes and have different chemical and mechanical properties. PET fibers often have a higher strength and stiffness than PCDT fibers, which are characterized as more ductile. With reference to use as concrete fibers, contrary to PCDT fibers, PET fibers have been subject to extensive research, mostly as fibers recycled from consumer products. Henceforth, the polyester fibers refer to the PET variety. It must be noted that, although PE and PET share the polyethylene name, they are chemically different, as PET is a polyester, not a type of polyethylene. Acrylic Fibers Acrylic is a polymer that contains at least 85% acrylonitrile by weight [22], with a chemical formula of (C 3 H 3 N) n . The name "acrylic" is short form for, and essentially interchangeable with, polyacrylonitrile (PAN). As previously mentioned, PAN fiber is also the precursor material used to manufacture PAN-based carbon fiber. Acrylic fibers with a high tensile strength and elastic modulus were employed in the form of small-diameter short-cut fibers to replace carcinogenic asbestos [16]. As listed in Table 1, there are a wide range of strength and stiffness parameters that PAN fibers can possess. Due to this variation, the properties of FRC with this type of fiber can also vary substantially. 
The research pertaining to the performance of PAN fibers in cementitious composites containing coarse aggregates is rather limited; however, there is more evidence in the literature for acrylic fibers in pastes and mortars, likely because PAN fibers are predominantly micro in form. Despite a relatively wide availability, acrylic microfibers have a higher price compared to other low-strength synthetic fibers. Aramid Fibers Aromatic polyamide is a polymer in which at least 85% of the amide groups are bound directly to two aromatic rings [23]. Known in short form as aramid, this fiber has many high-performance applications, owing to its high strength and elastic modulus relative to most other synthetic fibers. Aramid fibers are 2.5 times stronger than silica glass fibers and 5 times stronger than steel fibers per unit weight [16]. These unique qualities have drawn attention to aramid fibers for applications as reinforcement in cementitious matrices. The two most common types of aramid fibers are marketed under the trade names Kevlar and Technora. These two fibers possess different properties, mainly due to differences in their production methods. Kevlar is produced by dry and wet spinning of a sulfuric acid solution of aromatic polyamide, while Technora fiber production does not utilize acid spinning [24]. Aramid fibers are expensive, and they are hard to find in the concrete fiber market. Glass Fibers In this study, glass fibers refer to fibers derived from naturally occurring minerals or rocks, consisting of (SiO2)n units. Glass fibers are manufactured by extruding melted parent material into a filament form. During the extrusion process, the filaments are coated with a material called sizing, which equips the fibers with the desired surface texture and interfacial properties for the matrix within which they will be used. With regard to glass fibers used in concrete, individual sizing-coated glass filaments are typically gathered into strands of around 200 filaments and cut to desired lengths. Depending on the production process and intended use, glass strands can be made to disperse back into their filament (microfiber) form when in contact with water (water dispersible), or they can be manufactured to stay in an integral strand (macrofiber) form. A new type of macro glass fiber has recently been developed by impregnating glass strands with an alkali-resistant polymer resin. This type of resin-impregnated fiber follows the same concept as glass fiber-reinforced polymer (GFRP) rebar, only on a smaller scale. The two main types of glass fibers that have been frequently used in practice as reinforcement in cementitious composites are silica glass and basalt glass. Due to the chemical similarity of their parent materials, the final fiber products are also chemically similar. Basalt and silica glass fibers contain high amounts of silicon dioxide (typically 40% to 70%), depending on the composition of the parent material. The main difference between basalt and silica glass fibers is that basalt glass fibers tend to have significant levels of iron, potassium, magnesium, and sodium oxides, while silica glass fibers typically have low dosages of the mentioned oxides but can contain a significant dosage of boron oxides [25]. Although their production methods are similar overall, the production of silica glass fibers often involves the use of additives to improve the physical properties of this type of fiber.
Basalt glass fiber production, however, does not require additives, resulting in less consistent fiber properties in the finished product. Furthermore, basalt fiber production is usually a simpler process, making basalt glass fibers less expensive than silica glass fibers [26]. Table 2 summarizes the main properties of silica and basalt glass fibers, although these properties remain highly dependent on the fiber's parent material and manufacturing process. Silica Glass Fibers The first type of glass fibers used for concrete reinforcement was E (or electrical grade) glass. The E glass was originally developed for use in electrical applications. The material was found to have satisfactory mechanical properties and was then tested for use as fiber reinforcement in polymer matrices and eventually cementitious matrices. Due to the degradation of glass fibers in concrete, however, alkali-resistant (AR) glass fibers were later developed. AR glass fibers have a relatively high tensile strength and elastic modulus compared to most polymer fibers. The most common application of AR glass concrete fibers is thin sheet components for exterior façade panels [16]. Such panels are typically made from pastes or mortars that include high fiber volumes [27]. Due to the use of AR glass fibers primarily as thin sheet components, AR glass textile concrete has been developed, in which two-or three-dimensional woven glass fabrics are cast into mortars using a lay-up technique to produce several layers of aligned glass fiber reinforcement [28]. AR glass fibers, which are expected to maintain a set of minimum requirements based on ASTM C1666 [29], are very common and widely available in the concrete market. In general, the cost of AR glass fibers is relatively low. Basalt Glass Fibers In recent years, basalt glass fibers have received growing attention in the fiber concrete industry. Basalt glass fibers typically have a higher elastic modulus and tensile strength than silica glass fibers, as listed in Table 2. They are anticipated to gain further popularity in the concrete market as their production increases and unit cost drops. The recent popularity of basalt concrete fibers has provided abundant research in the literature on their contributions to the properties of fresh and hardened FRC. Basalt microfibers are available from a number of concrete fiber suppliers. Basalt fiber-reinforced polymer (BFRP) macrofibers are somewhat less popular in the market to date; however, they are available from select suppliers. Scope and Organization The state-of-the-art review presented through this study aims to provide a holistic guide on the capabilities, limitations, and potential applications of different types of polymer and glass fibers used in concrete categorized by their effects on various fresh and hardened properties of FRC. Building on the past efforts [22,[30][31][32][33][34][35][36][37][38], a significant number of relevant studies have been reviewed and their main observations and conclusions have been synthesized. This has led to detailed comparisons with the ultimate goal of shedding light on the effects of fiber type on the stability and bond (Section 2), workability (Section 3), pre-peak mechanical properties (Section 4), post-peak mechanical properties (Section 5), shrinkage (Section 6), and extreme temperature resistance (Section 7) of FRC products from both scientific and practical perspectives. 
Finally, a detailed comparison of the reviewed fibers has been synthesized to further elaborate on their capabilities and limitations (Section 8). It is important to note that the reported outcomes are dependent on not only the fiber material and volume dosage, but also the fiber dimension and matrix composition used in the individual study, which should be considered when comparing experimental test results across the studies and fiber types. This review study is expected to benefit a wide spectrum of researchers and engineers in the concrete industry to properly choose the most appropriate concrete fibers, based on their target applications and desired performance characteristics. Stability and Bond Concrete's matrix is known to be corrosive to certain materials due to its very high pH value, originating from highly alkaline hydration products. Therefore, it is important to ensure the chemical stability of any fibers added to FRC before investigating their properties and suggesting possible applications. Stability of fibers refers to their ability to withstand the concrete environment during their expected service life without experiencing material degradation or dimensional alteration, hence, maintaining their efficiency. In particular, when fibers undergo deterioration, they can lose a portion of their cross-section, which adversely affects their load-carrying capacity, diminishing the advantages of FRC over plain concrete. As for bond characteristics, the bond between a fiber and its surrounding concrete is quantified as the force needed for the fiber to either be pulled out of the concrete matrix or experience rupture. When a bulk of discontinuous fibers are added to concrete, a combination of the aforementioned failure modes is expected under external loads, based on the fiber-matrix bond, which is influenced by the concrete's properties and fiber's characteristics. As stated in Banthia [39], fiber properties (e.g., type, shape, length, and coating), matrix characteristics (e.g., water-to-cement ratio, aggregate size, and admixtures), and environmental and loading conditions contribute to the pull-out behavior of a fiber in concrete. The fiber-matrix bond can be purely mechanical or a combination of mechanical and chemical. Fibers made of chemically inert materials remain unreacted in concrete, thus, the only mechanism to resist the load applied to them is the friction between the individual fibers and their surrounding concrete matrix. On the other hand, when a reaction forms between the fibers and the concrete matrix, the chemical bond helps with the friction resistance, delivering a combination of mechanical and chemical bonds. The fiber-matrix bond characteristics are particularly important in FRC products because they directly affect both pre-peak and post-peak mechanical properties. Polypropylene Fibers The hydrophobic nature of PP fibers often results in an overall weak fiber-matrix bond, leading to a mode of failure governed by the fiber pull-out under external loads. However, if the concrete matrix's strength is increased sufficiently, or an appropriate mechanical anchorage is provided to the fibers (with geometric modifications), the mode of failure can change to the rupture of individual fibers, utilizing their full capacity. Cifuentes et al. [40] confirmed this assessment by reporting that PP fibers fail due to pull-out in low-and normal-strength concrete, while they failed because of a rupture in high-strength concrete. 
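The pull-out versus rupture trade-off described above can be illustrated with a simple, idealized force balance. The short Python sketch below compares the force needed to slide a straight fiber out of the matrix (assuming a constant average interfacial bond stress) against the force needed to break the fiber. The fiber diameter, embedded length, tensile strength, and bond stresses used here are assumed, illustrative values, not data from any cited study, and real pull-out behavior (slip-hardening, matrix spalling, inclined fibers) is considerably more complex.

```python
import math

def fiber_failure_mode(tensile_strength_mpa, diameter_mm, embedded_length_mm, bond_strength_mpa):
    """Compare an idealized pull-out force with the fiber rupture force.

    Assumes a straight, smooth fiber with a constant average interfacial
    bond stress (a strong simplification of real pull-out behavior).
    Forces are returned in newtons (MPa * mm^2 = N).
    """
    d = diameter_mm
    pullout_force = bond_strength_mpa * math.pi * d * embedded_length_mm   # interface capacity
    rupture_force = tensile_strength_mpa * math.pi * d**2 / 4.0            # fiber capacity
    mode = "pull-out" if pullout_force < rupture_force else "rupture"
    return pullout_force, rupture_force, mode

# Illustrative (assumed) case: a macro synthetic fiber, 0.8 mm diameter, 25 mm embedded,
# ~400 MPa tensile strength, with a weak and a strongly improved bond stress.
for tau in (0.5, 4.0):  # MPa, assumed
    p, r, mode = fiber_failure_mode(400.0, 0.8, 25.0, tau)
    print(f"bond {tau:.1f} MPa: pull-out {p:.0f} N vs rupture {r:.0f} N -> {mode}")
```

Under this simplified model, raising either the bond stress or the embedded length shifts the governing mode from pull-out to rupture, which is consistent with the trend reported above for stronger matrices and mechanically anchored fibers.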
In order to increase the fiber-matrix bond strength, fibrillated PP fibers can have their bond improved by splitting into fibrillated bundles during the mixing process (Figure 1a). Monofilament PP fibers, on the other hand, can have their bond strengths improved by shape variations. Oh et al. [41] tested straight, crimped, hooked, button-end, twisted, sinusoidal, and partially-sinusoidal shape synthetic macrofibers for their bond strengths. The study concluded that the crimped and sinusoidal shape monofilament PP fibers exhibit the highest improvement in bond properties compared to straight monofilament PP fibers. Another common way to increase the fiber-matrix bond strength of monofilament PP fibers is by twisting the straight fibers along their longitudinal axis or indenting their surfaces (Figure 1b). Yin et al. [42] indicated that diamond surface indentations are more effective than line indentations in increasing the bond of macro PP fibers. (Figure 1: (a) fibrillated PP fibers [43], (b) surface-indented PP fibers [42], and (c) PP fibers pre-treated with microbially-induced calcite precipitation [44].) Chemical pre-treatments can also be adopted to increase the fiber-matrix bond for PP fibers. López-Buendía et al. [45] used an alkaline surface treatment and found that the adhesion of macro PP fibers to the concrete matrix increases as a result of longitudinal roughness. In addition, the cited study showed how a chemical adhesion between the individual fibers and the concrete matrix contributes to increasing the flexural strength of FRC products. In a study performed by Hao et al. [44], a microbially-induced calcite precipitation pre-treatment method was investigated. The outcome showed the success of this method in improving the bond between the treated PP fiber and the mortar matrix. This was explained by noting that the calcium carbonate produced as a result of microbial activity increases the surface roughness of the macro PP fiber, as depicted in Figure 1c. Further to the aforementioned methods, high-tenacity PP fibers can be utilized to develop sufficient bond, providing significant post-crack residual strength and toughness. In a recent development, a new type of PP fiber has been produced with the ability to chemically bond with the concrete matrix. When this new type of macro PP fiber was compared to a traditional type of macro PP fiber, both in monofilament form, the fiber pull-out capacity was found to improve by more than 30% [46]. Nylon Fibers The pull-out behavior of nylon fibers from the concrete matrix is known to be very similar to that of PP fibers [47]. However, the amide group (-CO-NH-) in nylon fibers reacts with water, which draws moisture into the fibers and causes them to swell [48]. By examining the surface of the nylon fibers during pull-out tests, it was noted that the pull-out capacity increases during the loading process because the concrete matrix scars the outside surface of the nylon fiber, effectively increasing the friction between the fiber and the concrete matrix. A similar observation was also made for micro PP fibers [47]. According to Yap et al. [49], nylon fibers outperformed fibrillated PP fibers in compressive strength tests, due to the hydrolysis of the nylon's amide group and the subsequent swelling of the fibers, which increased the bond between the nylon fibers and the surrounding concrete. The overall bond behavior of nylon fibers was documented by Khan and Ali [50]. In the cited study, when 50 mm long nylon fibers were tested in a normal-strength concrete under flexural loads, about 70% of the nylon fibers were found to fail due to pull-out, while the remaining 30% failed because of rupture. Polyvinyl Alcohol Fibers PVA fibers are characterized as hydrophilic, have a non-circular cross section, and form hydrogen bonds with the concrete matrix. These characteristics give PVA fibers the ability to form a strong bond in FRC applications [22], which is estimated to be eight times higher than the PP fiber's bond strength [51]. Although PVA fibers are hydrophilic, they have a very low water absorption. PVA fibers are also very compatible with the chemical environment of the concrete matrix, retaining nearly their entire strength after accelerated aging tests equivalent to 100 years [52]. Despite an excellent resistance to acidic and alkaline environments, Roque et al. [53] reported that PVA fibers can show degradation in seawater environments, especially after repeated wetting and drying cycles. It has been indicated in several studies that PVA fibers form both chemical and mechanical bonds with the concrete matrix. Through scanning electron microscope (SEM) investigations, Zhao and He [54] revealed the precipitation of the C-S-H gel on PVA fibers. Furthermore, Li et al. [55] showed that pulled-out PVA fibers undergo a notable diameter loss, as shown in Figure 2, which reflects the strong bond between the PVA fiber and the surrounding matrix. Due to the ability of PVA fibers to chemically bond with the concrete matrix, there is no need to alter the geometric shape of this type of fiber. Thus, PVA fibers are often manufactured in a monofilament form for both macro and micro sizes.
PVA fibers tend to fail by rupture rather than pull-out more readily than other fiber types. This has been attributed to a slip-hardening response, originating from strong fiber-matrix bond properties [56,57]. Additionally, it has been reported that the response of PVA fibers shifts from ductile to brittle as the fiber-matrix bond increases over time. The fiber failure mode can also shift from pull-out to rupture, depending on concrete matrix properties [58]. Polyolefin Fibers PO fibers are very compatible with the concrete matrix and do not degrade in the concrete environment. The PO fiber-matrix bond is mechanical in nature [59].
Depending on the manufacturing technique, macro PO fibers can be made with surface indentations to enhance their mechanical bond properties [18]. It has been suggested that, since PO fibers have a low superficial hardness, their mechanical bond can be increased as a result of micro-scale surface imperfections that form because of damage to the fibers at the time of mixing. As expected, the bond properties between the PO fibers and the concrete matrix improve with the progress of cement hydration [60]. In particular, through SEM investigations, Han et al. [61] found silica fume helpful in improving the bond between the PO fibers and the concrete matrix. Relatively low-modulus PO fibers were observed to be most effective, along with silica fume, when 25 mm fibers were used in place of 50 mm fibers for improving the mixture's strength and ductility characteristics [61]. Carbon Fibers Carbon fibers are chemically inert and, as a result, do not undergo strength deterioration in the concrete environment [18,[62][63][64]. Therefore, carbon fibers can only form mechanical bonds with the concrete matrix. Fibers with a high modulus of elasticity, such as carbon fibers, tend to pull out rather than rupture under the external loads applied to FRC. This, however, also depends on the matrix strength and the fiber dimensions, as well as the contact surface area between the fibers and the concrete matrix. Pitch carbon fibers in mortar were found to have sufficient strength to fail by pull-out, unless latex is used to enhance the fiber-matrix bond, in which case the failure mode can shift to fiber rupture [65]. Polyethylene Fibers High-strength polyethylene (HSPE) fibers are chemically inert, providing high stability and degradation resistance in the concrete environment, in addition to high resistance against acids and seawater. Recycled PE fibers also adequately withstand the alkalinity of the concrete environment [66]. The HSPE fibers have a low coefficient of friction, causing them to form a weak bond with their surrounding matrix [22,67]. The bond strength of the HSPE fibers, however, can be improved by surface treatments. Wu and Li [68] studied such treatments and reported that the fibers can develop a bond strength with the concrete matrix of up to 1.0 MPa if a surface finish is applied to increase their friction coefficient. Additionally, it was found that plasma treatment of the fibers can considerably increase the fiber-matrix bond strength [68]. In a separate study, He et al. [69] showed that coating the HSPE fibers with carbon can increase their frictional bond strength by more than 20%. As stated by Pešic et al. [66], recycled high-density PE fibers often fail due to pull-out caused by mechanical friction, while they undergo high elongations before being pulled out of the concrete matrix. Through SEM investigations, the cited study confirmed that those fibers do not form a chemical bond with the concrete matrix. Polyester Fibers Despite their overall promise, the main concern with the use of polyester fibers in cementitious composites is the uncertainty with regard to their stability in the highly alkaline environment of concrete.
Most of the available studies have reported some level of degradation after a prolonged exposure of this type of fiber to extreme environments. Kim et al. [70] studied recycled PET FRC for strength retention after exposure to alkaline and acidic solutions. It was reported that an exposure to such solutions not only reduces the strength of PET fibers, but also significantly deteriorates the physical and mechanical properties of the entire concrete matrix. These observations were further supported by Fraternali et al. [71], who reported that, after 12 months in an aggressive seawater curing environment, the toughness of recycled PET FRC dropped by more than half. In a separate effort, Rostami et al. [72] confirmed the past findings and reported a tensile strength loss over time. Additionally, Silva et al. [73] showed that PET FRC can suffer from the loss of toughness during the expected service life. The cited study used SEM to characterize fiber degradation under a prolonged exposure to an alkaline environment. The outcome captured surface irregularities (as shown in Figure 3), while in some regions, complete degradation of the fibers was evident. Contrary to the studies that confirmed the degradation of PET fibers in alkaline environments, Ochi et al. [74] concluded that recycled PET fibers undergo negligible degradation after 120 h in an alkaline environment at 60 °C. This was quantified through direct tensile tests on individual fibers. This conclusion should be considered cautiously, as the alkaline exposure of the tested fibers may not have been long enough to relate the results to long-term durability considerations. Regardless, there is sufficient evidence in the literature to conclude that PET fibers can undergo some level of degradation in the concrete environment, which is a major limitation to the fiber's reinforcing potential. PET fibers can have variable chemical and mechanical properties, depending on their manufacturing techniques. Similar to other polymeric fibers, the fiber-matrix bond of polyester fibers is reported to be only mechanical in nature [16]. Acrylic Fibers Early forms of acrylic fibers exhibited low strength and elastic modulus, as well as poor resistance to acids and alkalis, which limited their applications in concrete [18]. However, the new generation of acrylic fibers has shown little to no sensitivity to the alkalinity of concrete. Some research studies have reported small long-term sensitivity to alkaline environments, especially at higher temperatures [47], while others have reported that acrylic fibers are not sensitive to chemical degradation [75,76]. Hahne et al. [77] studied the performance of FRC made with high-strength PAN fibers. The study explored high-strength acrylic fibers of different lengths (i.e., 6-24 mm) and diameters (i.e., 18-104 micrometers), as well as strengths (up to 1000 MPa) and elastic moduli (up to 19.5 GPa), for their fiber-matrix bond properties. It was reported that acrylic fibers form a satisfactory bond with the concrete matrix, due to their irregular cross-sectional shapes.
Confirming this assessment, Jamshidi and Karimi [76] used 3-4 mm fibers and found that acrylic fibers, similar to nylon fibers, form a stronger bond with the concrete matrix in comparison to PP fibers, partly due to the formation of cement hydration products on the fiber surface, as illustrated in the SEM image provided in Figure 4 [76]. Aramid Fibers A limitation of aramid fibers for use as a concrete fiber is the lack of clarity in the literature about the level of strength degradation of this type of fiber in the concrete environment [8]. Uomoto and Nishimura [78] found that the sensitivity of aramid fibers to chemical deterioration correlates with the method used for manufacturing the fibers. The cited study reported that aramid fibers that were acid spun (i.e., Kevlar) underwent degradation at high temperatures (80 °C and above) in acidic, alkaline, and distilled water solutions. Aramid fibers that were not acid spun (i.e., Technora) had much better chemical durability in similar solutions. Although the degradation of Technora aramid was an issue at high temperatures, such temperatures are not expected to be encountered in most concrete applications. Derombise et al.
[79] studied the alkali resistance of Technora aramid fibers and reported that, despite small amounts of chain degradation after alkali exposures, the fibers retain nearly all of their mechanical properties. It is important to note that the tests were performed with pH values up to 11, while concrete provides an environment with higher pH values, which can exacerbate the alkali deterioration of the fibers. Uomoto and Nishimura [78] reported that aramid fibers were capable of retaining 90%, 60-85%, and 45% of their strength after long-term aging in alkaline, acidic, and ultraviolet exposure environments, respectively. Additionally, aramid fiber-reinforced polymer (AFRP) showed increased alkali resistance compared to monofilament aramid fibers [78]. Overall, the available studies suggest that aramid fibers can be sensitive to alkali degradation; however, if the fibers are not acid spun and high temperatures are not anticipated through the service life of FRC products, alkali degradation of the aramid fibers in concrete is not expected to be an issue. Kevlar fibers are reported to have a weak bond with the concrete matrix due to their smooth surface, inert nature, and high crystallinity [80]. To address this issue, Zhang et al. [81] conducted a chemical treatment on Kevlar fibers and observed that treated fibers can have a more roughened surface (and a better fiber-matrix bond) in comparison to untreated fibers.
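Two quantitative points above are easy to misread without a sense of scale: the retention percentages reported by Uomoto and Nishimura [78], and the gap between the pH 11 used by Derombise et al. [79] and the pH of concrete pore solution, commonly quoted as roughly 12.5-13.5. The short sketch below treats the pore-solution pH range and the nominal fiber strength as assumptions (they are not values from the cited studies) and simply shows how the retention factors translate into residual strength and how strongly hydroxide concentration scales with pH.

```python
# Residual strength implied by the retention factors reported in [78],
# applied to a nominal (assumed) 3000 MPa aramid fiber.
initial_strength_mpa = 3000.0  # assumed nominal value, not from the cited study
retention = {"alkaline": (0.90, 0.90), "acidic": (0.60, 0.85), "ultraviolet": (0.45, 0.45)}

for exposure, (lo, hi) in retention.items():
    print(f"{exposure:12s}: {initial_strength_mpa*lo:.0f}-{initial_strength_mpa*hi:.0f} MPa retained")

# Hydroxide concentration scales as 10**(pH - 14), so each pH unit is a tenfold change.
for ph in (11.0, 12.5, 13.5):  # 12.5-13.5 assumed as a typical pore-solution range
    oh = 10 ** (ph - 14.0)     # mol/L
    print(f"pH {ph:4.1f}: [OH-] ~ {oh:.3f} mol/L")
```

The roughly one to two orders of magnitude increase in hydroxide concentration between pH 11 and typical pore-solution values is the main reason the Derombise et al. results should be extrapolated to concrete with caution.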
Glass Fibers Despite superior mechanical properties, the main limitation of glass fibers in cementitious composites is their chemical sensitivity to alkaline environments. The alkali degradation of non-alkali-resistant glass fibers is well documented in the literature [78,82,83]. Wu et al. [83] studied the durability of basalt and silica glass fibers after exposure to acid, alkali, and salt solutions. The study found that both basalt and silica glass fibers undergo full deterioration and retain none of their original strength characteristics after an extended exposure, noting that the deterioration mechanism of basalt and silica glass fibers in concrete is similar, owing to their similar chemical compositions [83]. In the cited study, both fibers showed better resistance to salt solutions, although an approximately 40% loss in tensile strength was recorded. The alkali deterioration was characterized by a pitted fiber surface. Similarly, Scheffler et al. [84] reported the formation of a shell around the fibers after 7 days of immersion in 5% NaOH solution, as shown in Figure 5. Such deterioration was found to sacrifice the effective cross-section (and associated strength) of the fibers. To reduce the degradation of glass fibers in alkali environments, zirconium oxides are added to the glass fiber production process to produce alkali-resistant (AR) glass fibers. The degradation prevention provided by the presence of zirconium oxides is that Zr-O bonds are stable under alkali attack, in contrast to Si-O bonds, which break in the presence of hydroxides. This leads to a zirconium dioxide protective layer on the exposed fiber surface, serving as a barrier to prevent possible fiber breakdown [18]. Adding zirconium oxides to glass fibers has become a common practice for the modification of silica glass fibers used in the concrete industry. Basalt glass fibers, however, have been subject to fewer research studies and are less common.
Among the limited studies on AR basalt glass fibers, Lipatov et al. [85] can be highlighted. Despite the increased stability of AR glass fibers in alkaline environments, there is sufficient evidence that they undergo some level of strength degradation in concrete. Based on the literature [86,87], AR glass FRC loses strength and ductility in tension and flexure as time progresses in natural weathering, underwater, and accelerated aging environments. The strength loss depends on the pH, temperature, and chemical composition of the AR glass fibers and the concrete matrix, as well as the exposure condition [88,89]. ASTM C1666 [29] includes minimum specifications for AR glass fibers to be used in cementitious matrices. It can be noted that the minimum strength retention values (expected after four days in hot water) are only 25% for water dispersible strands and 35% for integral strands when considering the lower bound of 1.0 GPa as the initial fiber tensile strength. This lack of stringency in the standard shows that AR glass fiber strength degradation can be relatively large while the fibers are still considered alkali-resistant. This will be discussed separately for silica and basalt glass fibers in the following sections. Silica Glass Fibers It is generally accepted that AR silica glass fibers mixed in cementitious matrices lose some of their reinforcing effectiveness over time because of their chemical sensitivity to the alkaline environment, as explained in the previous section [84,90]. In order to help improve the long-term performance of AR glass FRC, Song et al. [91] investigated modifying the binder with a partial replacement of ordinary Portland cement with calcium sulfoaluminate cement. The study found that the proposed method greatly improves the long-term performance of the composites. After 10 years of aging, the modified composites retained substantial ductility compared to the control specimens, which showed no post-crack residual strength after 10 years of exposure. The cited study clearly reflects that, if proper mixture designs are used, glass fiber degradation can be mitigated. This can involve the use of pozzolans, such as silica fume, metakaolin, Class C and F fly ash, and pulverized borosilicate glass (also referred to as E glass).
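Two simple calculations help put the glass fiber durability discussion in perspective: the loss of load-carrying area when a corroded surface shell forms around a filament (as reported by Scheffler et al. [84]), and the residual strength implied by the ASTM C1666 [29] retention limits quoted above. In the sketch below, the filament diameter and shell thicknesses are assumed, illustrative values only; the retention factors and the 1.0 GPa lower-bound strength come from the discussion above.

```python
def residual_area_fraction(diameter_um, shell_um):
    """Fraction of the original cross-section left intact once a corroded
    surface shell of the given thickness no longer carries load."""
    core = max(diameter_um - 2.0 * shell_um, 0.0)
    return (core / diameter_um) ** 2

# Assumed 14-micrometer filament with 1 and 2 micrometer corroded shells.
for shell in (1.0, 2.0):
    frac = residual_area_fraction(14.0, shell)
    print(f"{shell:.0f} um shell -> ~{frac*100:.0f}% of the cross-section "
          f"(and roughly that share of the capacity) remains")

# Residual strength implied by the ASTM C1666 minimum retention values,
# taking the 1.0 GPa lower-bound initial strength cited above.
initial_mpa = 1000.0
for strand, retention in (("water dispersible", 0.25), ("integral", 0.35)):
    print(f"{strand} strand: minimum retained strength ~ {initial_mpa*retention:.0f} MPa")
```

Even the minimum permitted retained strengths remain far above the tensile strength of the surrounding matrix, which helps explain why fibers meeting only the minimum requirement can still contribute to post-crack behavior, although the margin relative to an undegraded fiber is clearly much smaller.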
In addition to adding zirconium to the chemical structure of glass, applying alkali-resistant sizing to the filament surface during production, or changing the chemistry of the concrete matrix, glass fiber strands can be impregnated with alkali-resistant and surface-bonding resins, such as epoxy and vinyl ester, to improve their long-term durability. These types of polymer-impregnated glass fibers are made into macro concrete fibers that are relatively new to the concrete industry and are essentially miniature versions of GFRP rebars. The alkali degradation of GFRP macrofibers has not been fully described in the literature; however, due to the similarities that these fibers share with GFRP rebars, the research related to the durability of GFRP rebars can be cautiously extrapolated to evaluate the long-term durability of GFRP macrofibers. The investigations that utilized accelerated aging techniques report concerns about the durability of glass-based fibers in concrete. Significant amounts of degradation and strength loss have been found, especially under high temperature and aggressive chemical environments [92][93][94]. The past studies sparked major concerns in the concrete industry about the level of safety provided by the structures that use GFRP as primary reinforcement. These concerns motivated several case studies and critical reviews to characterize the level of GFRP strength degradation for in-service structures [95][96][97][98]. The listed efforts found that the degradation reported from accelerated aging tests on GFRP products largely overestimates the actual level of degradation in the field. Several case studies reported little to no GFRP degradation for in-service structures, owing to the effective protection provided by the polymer resin. The studies also concluded that the accelerated aging tests are not necessarily representative of the in-situ concrete condition, because of the use of elevated temperatures and the unlimited supply of hydroxyl ions [97]. Although several studies have focused on the bond properties of silica GFRP bars in concrete, limited studies are available concerning the bond properties of silica glass fibers in concrete, which can be highly different from the bond properties of silica GFRP bars, due to differences in their size and shape. In a study completed by Scheffler et al. [99], the pull-out properties of AR glass fibers with two types of sizing were investigated under quasi-static and high-rate (i.e., impact) loading protocols. Upon measuring the local interfacial shear strength and critical energy release rate, it was found that, regardless of the sizing type, the interfacial friction stress undergoes a reduction when a high-rate load is applied, mainly because of the smoothing of surface asperities on the AR glass fibers. The study concluded that it is possible to control the pull-out behavior of AR glass fibers by adopting an appropriate sizing, which can be significantly helpful in adjusting FRC's post-peak mechanical properties. Basalt Glass Fibers With regard to chemical durability, basalt glass fibers show an alkali degradation similar to that of E glass fibers [83,100]. In order to overcome this drawback, a range of methods have been examined in the literature, in addition to developing AR basalt glass fibers. Rybin et al.
[101] studied the alkali resistance and mechanical properties of basalt glass fibers coated with zirconyl chloride octahydrate. The study found that the surface-coated fibers undergo delayed strength degradation under alkali exposure. This was also attributed to the surface coating thickness and density. Lipatov et al. [85] investigated the addition of zirconium oxides to basalt fibers during their manufacturing process. The study found that the solubility limit of zirconium in basalt glass is 7.1%, i.e., much less than that of silica glass. Despite the inability to reach high zirconium content during manufacturing, the AR basalt glass fibers with 5.7% zirconium content showed an alkali degradation (in terms of weight loss) similar to the AR silica glass fibers with 18.8% zirconium content. The strength degradation of the AR basalt glass fibers was substantially higher than that of the AR silica glass fibers, however, the compressive, tensile, and flexural strengths of the hardened mortars prepared with the basalt glass fibers that had an optimal zirconium content remained similar to those of the mortars prepared with the AR silica glass fibers [85]. Mingchao et al. [102] tested the chemical resistance of AR basalt glass fibers by boiling them in distilled water, salt solution, and acid solution. It was reported that the AR basalt glass fibers undergo stiffness and strength degradation in acid solution. In alkali solution, however, their stiffness was mostly maintained, but their strength underwent a gradual decline. In recent years, similar to AR silica glass fibers, filaments of basalt glass fibers have been impregnated with alkali-resistant polymer resins to create BFRP macrofibers. The same long-term durability aspects discussed in the silica glass fiber section of this review study are valid for basalt glass fibers as well. Considering the lack of studies focusing on basalt glass fibers, this extrapolation can be justified, especially due to the fact that similar alkali-resistant polymer resins are used to impregnate both GFRP and BFRP. According to Jiang et al. [103], the SEM images of the concrete mixtures reinforced with both micro and macro basalt glass fibers reveal that chopped basalt glass fibers are densely covered with hydration products after seven days of curing, which creates a satisfactory bond with the concrete matrix. However, after 28 days of curing, the SEM images show a gap between the individual fibers and the concrete matrix, implying the possibility of debonding in later ages. In a separate effort, Arslan [104] reported the presence of a partial bond between the macro basalt glass fibers and their surrounding concrete, which contribute to increasing the mechanical strength of FRC. Furthermore, the cited study reported that all the fibers failed by pull-out and no fiber rupture was observed. This can be attributed to the high tensile strength of basalt glass fibers, outperforming the fiber-matrix bond strength. Workability Concrete is the most commonly used material in the construction industry. One of the principal reasons for such a widespread application is the concrete's workability in the fresh state, making it possible to form several shapes in various sizes without needing any special treatments [105]. Therefore, properly selecting the FRC ingredients and adjusting their proportions to achieve the desired workability is critical. 
In particular, the shape, surface area, and dosage of the fibers, along with their water absorption capacity, are among the main deciding factors, which are explored with the necessary details in this section. Polypropylene Fibers Mohod [106] reported that PP fibers tend to form undispersed clumps and significantly reduce the slump at volumes above 1.0%. However, this was found to be highly dependent on the fiber dimensions and the mixture design. Dopko et al. [5] had a similar observation, indicating that PP macrofiber additions above 1.0% fiber volume greatly reduce the workability of FRC. In a comparison between PP and nylon fibers (used with a similar dosage in concrete), it was reported by Heo et al. [107] that the addition of PP fibers has a notable effect on the workability of FRC. The cited study also reported that using long PP fibers can further decrease the workability. Nylon Fibers Nylon fibers are hydrophilic and can absorb a small amount of water during the mixing process [16]. Several studies have found this feature beneficial to the dispersion of nylon fibers, compared to PP fibers. However, at higher volume dosages, the water absorption capacity of the nylon fibers may adversely affect the mixture's workability, due to the excessive absorption of mixing water. Yap et al. [49] noted that the workability of nylon FRC was less than that of PP FRC at the same fiber content in lightweight concrete. This could be due to the fact that, with a fiber volume of up to 0.75% tested in the cited study, the nylon fibers absorbed a significant amount of water, decreasing the workability of the mixture. Khan and Ali [50] reported that 50 mm long nylon fibers dispersed at fiber volumes close to 1.5% reduce the slump to almost 30% of the slump obtained for the control mixtures that contained no fiber. Polyvinyl Alcohol Fibers When PVA fibers are used in concrete, the workability of concrete drops due to the PVA fiber's water absorption. Hossain et al. [108] evaluated the effect of PVA addition on the fresh and rheological properties of self-consolidating concrete (SCC). The cited study observed that PVA microfibers greatly reduce the flowability and passing ability of SCC. In particular, it was reported that the addition of PVA fibers decreases the plastic viscosity of SCC, and that the reduction in viscosity became larger as the fiber content was increased. Compared to micro and macro steel fibers, PVA fibers were found to have a more pronounced effect on reducing the flowability and passing ability of SCC. Shafiq et al. [109] reported the need for an increased water-to-cement ratio and a sufficient dosage of superplasticizer to meet the target slump for PVA macrofiber mixtures. The cited study was able to achieve satisfactory workability characteristics with a 3.0% PVA fiber volume. Dopko et al. [5] reported difficulties when mixing macro PVA fibers in concrete at volumes over 1.0%, indicating that the fibers tend to re-aggregate and form clumps once a critical volume is reached. The cited study also found that PVA fibers decreased workability and caused dispersion issues at the same fiber volume as PP fibers, even though the PP fibers had a higher aspect ratio than the PVA fibers. Polyolefin Fibers Limitations in the fresh state as a result of adding PO fibers are similar to those previously discussed for PP fibers. PO fibers with surface indentations may further decrease the workability compared to smooth PO fibers, mainly because of the increased surface area per fiber.
Several studies have shown that PO fibers can be used in SCC mixtures [110,111]. No significant detrimental effects on workability, however, have been reported in the literature for PO fibers when added in low volumes. Alberti et al. [110] indicated that macro PO fibers with a length of 50 mm can mix well in SCC at volumes up to 1.0%. However, it should be noted that the cited study utilized a high water-to-cement ratio (i.e., 0.5) to improve workability. Zaroudi et al. [111] reported that the addition of PO fibers at more than 1.0% volume fraction significantly reduces the slump flow of SCC. Smirnova et al. [112] compared two methods of adding PO fibers to the mixture and concluded that adding the PO fibers to the fresh concrete and then mixing for a further 5 min leads to insufficient dispersion and the agglomeration of fibers. The proposed solution was mixing the fibers with the dry constituents (aggregates and cement) for one minute prior to the addition of water and superplasticizer. Noting that macrofibers are often better dispersed in the concrete matrix than microfibers, the maximum volume fraction of macro PO fibers in concrete can be higher than that of micro PO fibers [112]. Carbon Fibers Macro carbon fibers are uncommon since carbon fibers tend to break into shorter lengths during the mixing process because of their brittle nature [113]. In particular, the presence of coarse aggregates can increase the level of carbon fiber damage while mixing; however, such damage can be lessened by using appropriate mixing procedures and additives, such as methyl cellulose and superplasticizer, to further disperse the fibers with minimal mixing requirements [114]. The upper limit of carbon fiber dosage for conventional mixing has been found to be 1.0% by volume, due to the fiber's high aspect ratio and specific surface [8], although higher volumes can be accommodated with modified mixing procedures and admixtures [115]. Dopko et al. [116] reported adequate workability and dispersion of carbon microfibers in FRC mixtures that contained up to 0.5% carbon fiber volume. This was achieved by the addition of superplasticizer and a modified mixing procedure to increase the mixing energy. Polyethylene Fibers Zhang and Li [117] reported that the addition of PE fibers decreases the workability of FRC that contains fly ash and silica fume by reducing both slump and slump flow. PE macrofibers with lower strength and modulus of elasticity, similar to those of PP and PO, have been reported to mix sufficiently well into a normal FRC matrix at volumes up to 4.0%. This is a relatively high fiber volume, and it should be noted that a high water-to-cement ratio was utilized in the cited study to help with mixing. High volumes, i.e., in the range of 2.0-4.0%, of high aspect ratio HSPE fibers were used by Yamaguchi et al. [118]. Such fiber volumes did not cause any slump issues because of the use of a high superplasticizer dosage and a high shear force double-axis mixer. Polyester Fibers Although most of the past studies incorporated 1.0% PET fiber by volume (or lower) into FRC, PET macrofibers were reported to mix well in concrete at 1.5% or even up to 3.0% [74,119]. It must be noted that the cited studies utilized water-to-cement ratios equal to (or above) 0.55, leading to workable mixtures.
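The workability penalties attributed above to high aspect ratio and specific surface (for example, for carbon microfibers) follow directly from geometry: at a fixed volume fraction, the total fiber surface that must be wetted by paste scales inversely with the fiber diameter. The sketch below makes that comparison for an assumed microfiber and macrofiber; the diameters are illustrative choices, not values from the cited studies.

```python
def fiber_surface_area_per_m3(volume_fraction, diameter_mm):
    """Total lateral fiber surface area (m^2) per cubic meter of concrete,
    for straight cylindrical fibers at the given volume fraction.
    For a cylinder, surface area / volume = 4 / d, so total area = 4 * Vf / d."""
    d_m = diameter_mm / 1000.0
    return 4.0 * volume_fraction / d_m

# Assumed examples: a 0.015 mm (15 um) microfiber vs a 0.8 mm macrofiber, both at 0.5% volume.
for label, d in (("microfiber", 0.015), ("macrofiber", 0.8)):
    area = fiber_surface_area_per_m3(0.005, d)
    print(f"{label} (d = {d} mm) at 0.5% volume: ~{area:.0f} m^2 of fiber surface per m^3")
```

At the same dosage, the assumed microfiber presents roughly fifty times more surface to be coated with paste than the macrofiber, which is consistent with microfiber mixtures needing superplasticizer, additional paste, or higher mixing energy well before macrofiber mixtures do.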
Acrylic Fibers The addition of PAN fibers, from low volumes up to 2.5%, was reported to decrease the workability of FRC to the extent that the water-to-cement ratio has to be increased substantially. Superplasticizers can also be needed to accommodate the fiber addition, especially when using low-diameter fibers [77,120,121]. Aramid Fibers Nanni [122] investigated different volumes of AFRP macrofibers dispersed in concrete. AFRP fibers include hundreds of aramid microfibers bound together by resin to form a single macrofiber. The cited study found that AFRP fibers significantly decrease the apparent workability and slump of the concrete. Thus, 2.5% was recommended as the maximum volume fraction of AFRP fibers that can be incorporated into FRC with conventional mixing procedures [122]. Silica Glass Fibers AR glass fibers can be used in FRC made with conventional mixing procedures. However, it has been reported that high fiber volumes are difficult to achieve when using glass fiber filaments in concrete with conventional mixing procedures, because such fibers tend to disperse into the matrix unevenly; therefore, an increase in the water-to-cement ratio or additional mixing becomes required [18]. The additional mixing can potentially damage the fibers and compromise their long-term performance [8]. It should be noted that the effect of AR glass fibers on the workability of conventionally-mixed concrete is highly dependent on the aspect ratio and surface area of the fibers, which can be drastically increased for filament strands compared to integral strands. The study by Ghugal and Deshmukh [123] reported that AR glass microfibers (up to 4.5% of cement weight) were mixed into FRC that contained coarse aggregates with no mixing difficulties. The cited study employed a high water-to-cement ratio of 0.51 to increase workability, but there was no indication of using water-reducing admixtures. Basalt Glass Fibers Ayub et al. [124] studied how the addition of high volumes (up to 3.0%) of basalt microfibers affects the workability of pozzolanic concrete (made with high-range water-reducing admixtures) and reported no mixing problems. Noting that this was a high microfiber content, achieving a satisfactory slump highlighted that, with a proper mixture design and use of admixtures, as well as a high-energy mixer, high volumes of micro glass fibers can be incorporated into FRC that contains coarse aggregates. In the case of basalt macrofibers, Arslan [104] reported no workability issues when using up to 3.0% of this type of fiber in concrete. This was further supported by SEM images showing that basalt macrofibers disperse well in the concrete mixture, contrary to silica glass fibers, which tend to flocculate. Since basalt glass fibers have a density relatively similar to that of the concrete matrix, BFRP macrofibers have been reported to mix well (at volumes up to 4.0%) in concrete using conventional mixing procedures, compared to most other fibers [125]. In a separate study completed by Branston et al. [126], BFRP fibers were found to clump at 2.0% volume; however, no superplasticizer had been used. For SCC with a maximum aggregate size of 16 mm, it has been reported that BFRP macrofibers with an aspect ratio of 65 are detrimental to flowability at volumes over 1.15%, likely due to the stiffness and size of the fibers [127].
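Fiber dosages in this review are quoted interchangeably as volume fractions, mass per cubic meter, and occasionally percent of cement weight; converting between the first two only requires the fiber density. The helper below treats typical handbook densities as assumptions (actual products vary) and shows the conversion for a few of the dosages mentioned in this and the following sections.

```python
# Assumed typical fiber densities (kg/m^3); actual products vary.
DENSITY = {"PP": 910.0, "nylon": 1140.0, "PVA": 1300.0, "steel": 7850.0}

def volume_fraction_to_kg_per_m3(vf_percent, fiber):
    """Convert a fiber volume fraction (%) to a mass dosage (kg per m^3 of concrete)."""
    return vf_percent / 100.0 * DENSITY[fiber]

def kg_per_m3_to_volume_fraction(dosage_kg, fiber):
    """Convert a mass dosage (kg/m^3) to a fiber volume fraction (%)."""
    return dosage_kg / DENSITY[fiber] * 100.0

print(f"1.0% PP by volume    ~ {volume_fraction_to_kg_per_m3(1.0, 'PP'):.1f} kg/m^3")
print(f"0.7 kg/m^3 of PP     ~ {kg_per_m3_to_volume_fraction(0.7, 'PP'):.2f}% by volume")
print(f"1.0% steel by volume ~ {volume_fraction_to_kg_per_m3(1.0, 'steel'):.0f} kg/m^3")
```

This is worth keeping in mind when comparing studies: the 0.7 kg/m³ polypropylene dosage cited in the next section corresponds to well under 0.1% by volume, whereas the 1.0-4.0% volume fractions discussed above correspond to roughly 9-36 kg/m³ for polypropylene.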
Pre-Peak Mechanical Properties The engineering properties of the FRC used for structural design are often derived from pre-peak mechanical properties, where strength and modulus of elasticity are determined. It is generally accepted that the tensile and flexural strengths of a plain concrete are approximately 10% and 15% of the concrete's compressive strength, respectively [105]. To improve both tensile and flexural strengths, fibers are added to concrete mixtures in a wide variety of applications. When randomly dispersed fibers are incorporated into the concrete matrix, they act as reinforcement against tensile stresses. The main pre-peak mechanical properties of the FRC made with various fiber types are explored in this section. Polypropylene Fibers There are some inconsistencies in the literature as to whether or not PP fibers notably affect the strength parameters of FRC prior to crack propagation. Some studies have reported that the compressive strength of FRC increases or is not affected by the addition of macro PP fibers, but the tensile strength is significantly improved at volumes below 0.55% [128,129]. On the other hand, Cifuentes et al. [40] indicated that PP fibers increase both compressive and splitting tensile strengths of FRC. Some other studies found that low volumes of macro PP fibers had negligible impact on the flexural strength of concrete [130]. Ramezanianpour et al. [131] reported that the addition of PP fibers reduces the concrete's compressive strength, but increases the splitting tensile and flexural strengths by up to 40% and 10%, respectively. This trend was reported to be maintained until a fiber dosage of 0.7 kg/m³ is reached. The inconsistencies among the available studies regarding the ability of PP fibers to increase pre-peak strength properties can be attributed to variations in fiber dosage, geometry, and mechanical properties, as well as the characteristics of the concrete matrix. Studies have found that recycled PP fibers can deliver similar mechanical properties, while avoiding fiber degradation in concrete [42,132]. Nylon Fibers Song et al. [133] observed that the addition of nylon and PP fibers can increase the compressive strength and splitting tensile strength of FRC, with nylon fibers providing a better performance than PP fibers, owing to their higher tensile strength and elastic modulus. Yap et al. [49] also compared nylon and PP fibers in terms of the compressive, splitting tensile, and flexural strength of FRC, and reported that, although multi-filament PP fibers yielded a higher flexural strength, the addition of nylon fibers improved the compressive and tensile strengths further. On the other hand, Zia and Ali [134] found that a 5.0% addition of nylon fibers (by weight of cement) decreases the compressive and splitting tensile strengths of FRC by more than 30% and 10%, respectively. Ozsar et al. [135] reported that nylon microfibers are more effective in increasing the splitting tensile strength of mixtures with low water-to-cement ratios. This trend was noted to be reversed for nylon macrofibers. Polyvinyl Alcohol Fibers The studies on the use of PVA fibers have reported different results regarding their effects on the mechanical properties of FRC. In particular, it has been found that even low volumes of micro PVA fibers can reduce the compressive strength of FRC significantly [136].
On the other hand, Ahmad and Umar [137] reported that a PVA fiber addition of up to 0.3% of the SCC volume contributes to increasing the compressive strength. Noushini et al. [138] showed that a 0.25% PVA fiber addition increases the compressive and splitting tensile strengths of FRC, while any further increase in the fiber content can have adverse effects on the strength. The splitting tensile and flexural strengths of FRC have been reported by several studies to remain the same or experience an increase with the addition of PVA fibers [5,108,109,137]. However, Yeganeh et al. [136] reported a drop in the flexural strength of FRC, although an increase in the splitting tensile strength was noted. In the absence of any explanation for this observation in the cited study, it is likely that the low water-to-cement ratio of 0.3 hindered the dispersion of PVA fibers in the mixture, especially given the water absorption characteristics of PVA fibers. Polyolefin Fibers Similar to other low-strength synthetic fibers, the addition of low volumes of PO fibers to concrete does not have a significant effect on the mechanical properties of FRC. Alberti et al. [110] reported that, for low PO fiber volumes, only the tensile strength slightly increases, while for high PO fiber volumes, the compressive strength decreases slightly and the tensile strength increases substantially. Zaroudi et al. [111] observed that increasing the fiber content up to 1.25% improved both compressive and splitting tensile strengths; however, both strengths started to decrease once the fiber content exceeded 1.25%. Furthermore, the flexural strength of FRC was reported to increase with increasing fiber content [139]. Alani and Beckett [140] investigated the performance of PO fibers (in comparison to hooked-end steel fibers) for slab-on-ground reinforcement applications. It was found that surface-embossed PO macrofibers can provide benefits similar to those of steel fibers. The volumetric dosage corresponding to the equivalent performance of PO fibers was about one-third higher than that of steel fibers. The study, however, showed that high-tenacity macro synthetic fibers have the potential to be used as the primary reinforcement in certain slab-on-ground applications. Similarly, Alberti et al. [110] described a case study in which the conventional reinforcing bars of a concrete water pipeline casing had been completely replaced with 5 kg/m³ of PO macrofibers. This led to a satisfactory outcome, as only small tensile stresses were anticipated in the concrete. By eliminating conventional bars, the cost and time of construction were both significantly reduced. Carbon Fibers Carbon fibers can improve the mechanical properties of cementitious composites if a sufficient volume is included. The extent of improvement is proportional to the strength and modulus of elasticity of the carbon fibers used. Stronger and stiffer carbon fibers more effectively increase the strength properties, while weaker ones are more likely to contribute to enhancing the toughness. Among the limited studies available, Yao et al. [3] tested FRC made with 0.5% volume of high-strength micro carbon fibers and found that the fiber addition increased the compressive strength, splitting tensile strength, and modulus of rupture by 14%, 19%, and 9%, respectively. Chen et al.
[141] reported that the addition of carbon fibers increases the compressive strength of the concrete and that the best result is achieved when carbon fibers are used at 1.0% by weight of cement, although the cited study did not investigate the effect of higher fiber contents. Dopko et al. [116] tested varying volumes of carbon microfiber, accelerating admixture, and shrinkage-reducing admixture for their effects on the compressive and splitting tensile strengths of FRC. The study found that increasing the carbon microfiber volume generally increases the 24-h compressive and splitting tensile strengths of FRC. The presence of 0.3% carbon microfiber also increased the 7-day compressive and splitting tensile strengths by an average of 9.6% and 22.8%, respectively. On the other hand, Chen and Chung [142] reported that, although the addition of carbon fibers increased the flexural strength, the compressive strength of the specimens decreased, most likely because of the increased air content originating from the fiber addition. It should be noted that the cited study used a relatively weak carbon fiber (with a tensile strength of 690 MPa), which can explain the reported findings. Polyethylene Fibers HSPE fibers have shown adequate reinforcing effects in concrete. In the limited studies available, mixtures with HSPE fibers (as low as 0.025%) have exhibited higher flexural strengths compared to those made with 0.1% fibrillated PP fibers [143]. In a separate effort, Yamaguchi et al. [118] explored 2% and 4% (by volume) of HSPE fibers for their effects on compressive, splitting tensile, and flexural strengths and reported an increase in all the strength values because of the HSPE fiber addition. The possibility of using recycled PE fibers for concrete reinforcement has also been investigated. Pešić et al. [66] studied FRC that contained PE fibers made from recycled consumer products. The fibers used in the cited study had a relatively low yield strength (i.e., 12 MPa compared to 40-80 MPa, which is common for regular HSPE fibers) and a relatively low modulus of elasticity (i.e., 0.5 GPa compared to 0.9-1.1 GPa, which is common for regular HSPE fibers), mainly due to the recycling process. The study found that the FRC pre-peak strength properties were not significantly influenced by the addition of recycled fibers compared to the control mixture that contained no fibers. Polyester Fibers Research has shown that polyester fibers are capable of improving the mechanical properties of concrete. The bulk of research that has been conducted on PET fibers involves monofilament macrofibers made from recycled plastics; however, limited studies have also investigated non-recycled PET fibers. Swamy and Barr [144] tested 20 mm-long polyester fibers with a high aspect ratio at volumes up to 1.0%. The study found that the fiber addition increased the compressive, flexural, and splitting tensile strengths of the hardened composite by 5%, 7%, and 27%, respectively. Sivakumar and Santhanam [145] investigated polyester microfibers dispersed at 0.5% volume in a high-strength concrete matrix and determined that the compressive strength was not significantly affected by the addition of polyester fibers, but the elastic modulus, splitting tensile strength, and flexural strength were all improved. Recycled PET fibers have shown different effects on the concrete's compressive strength, depending on their tensile strength, shape, length, and diameter. Kim et al.
[70] compared the performance of recycled PET fibers (made through extruding shredded bottles) to non-recycled PP fibers. Both macro synthetic fibers were 50 mm long with similar aspect ratios, but the PET fibers were surface embossed, while the PP fibers were crimped. The cited study found that, for both PP and PET fibers, the compressive strength and elastic modulus slightly decrease with increasing fiber volume. However, the ultimate flexural strength of FRC increased with increasing fiber volume. Similarly, Borg et al. [119] investigated FRC made with 0.5%, 1.0%, and 1.5% volume of recycled PET fibers that had been hand cut from waste bottles to two different lengths of 30 mm and 50 mm. Fibers of both lengths were either deformed or straight. The study found that the compressive strength is reduced when PET fibers are present in the mixtures, noting that the largest reductions occur for longer fibers mixed at higher volumes. Fraternali et al. [71] studied FRC mixtures that had been made with recycled PET macrofibers extruded from resins obtained from melting recycled bottle flakes. Three different recycled PET fibers were obtained, each with different geometries and parent resins, giving them different mechanical properties. These three fibers were compared to non-recycled PP macrofibers with an embossed surface texture. The cited study reported that all the PET and PP fibers improved the compressive strength of FRC, although the straight PET macrofibers were able to provide a higher increase in the compressive strength than the embossed PP fibers. In a separate set of efforts, recycled PET fibers were found effective in increasing the flexural strength of FRC, noting that increasing their length can decrease their effectiveness [71,146]. When comparing the FRC products made with recycled PET fibers to those with PP fibers, a relatively similar increase in their mechanical properties is often recorded [70,146], although Rostami et al. [72] reported that PP fibers enhance the flexural strength of FRC further than PET fibers. Acrylic Fibers PAN fibers with different tensile strengths, lengths, and volumes are reported to have a negligible or negative effect on the compressive strength of FRC [77,120]. Mo et al. [120] used 12 mm-long acrylic fibers at volumes up to 0.2% and reported a 7-13% drop in the compressive strength. Furthermore, the cited study reported that the addition of PAN fibers increased both the tensile and flexural strengths of FRC, especially at a 0.1% dosage. This is in line with the results of Hahne et al. [77], which indicated that longer PAN fibers can further improve the mechanical properties of PAN FRC. Aramid Fibers Zhang et al. [147] investigated aramid microfibers at volumes up to 1.5% in concrete. The study found that a 0.5% fiber volume was able to slightly increase the compressive strength and elastic modulus of the composite; however, the 1.0% and 1.5% fiber volume mixtures showed a reduced compressive strength and elastic modulus. Nanni [122] reported that AFRP fibers marginally increase the pre-crack flexural and splitting tensile strengths of FRC. The use of twisted macro Technora aramid fibers with a 0.5 mm diameter and cut lengths of 30-40 mm has also been investigated in the literature [148,149]. The studies found satisfactory results with twisted aramid macrofibers. Chan et al.
[148] tested 30 mm and 40 mm long Technora aramid twisted macrofibers dispersed in concrete at 1.0% volume for their effects on the flexural response of steel-reinforced concrete beams. The fibers did not significantly affect the compressive strength, but the peak flexural load in the beams was found to increase by about 9%. The fiber length did not cause significant differences in the flexural test results. However, when compared to hooked-end steel fibers, crack widths in the beams were smaller for aramid fibers up to the yield point of the embedded steel bars. Silica Glass Fibers Several studies have reported that the addition of glass fibers does not have a significant effect on the compressive strength of FRC, and in some cases, they were found to only marginally increase the compressive strength [104,123,150-153]. However, Khan and Ali [50] reported a drop in the compressive strength, which can be attributed to the relatively long glass fibers used in the cited study, although the silica glass fibers had a better performance than the nylon fibers tested. Söylev and Özturan [151] compared the effects of steel, glass, and PP fibers on the compressive strength of specimens with two water-to-cement ratios of 0.45 and 0.60. The study, which employed both air curing and moist curing methods, found that the mixtures with glass and PP fibers had a rather similar performance, while steel fibers increased the compressive strength of the specimens. In another study, Arslan [104] observed that macro basalt glass fibers have a more pronounced contribution to the compressive strength than silica glass fibers. Kizilkanat et al. [154] reported a similar observation for microfibers. Furthermore, Barluenga and Hernández-Olivares [150] indicated that a low-dosage addition of glass fibers (e.g., 600-900 g/m³) does not change the flexural strength of FRC. However, by including more silica glass fibers, the splitting tensile and flexural strengths of FRC improve [50,104,151,154,155]. Silica glass fibers have shown a higher contribution to splitting tensile and flexural strengths than nylon and PP fibers, while delivering marginally lower strengths in comparison to steel and basalt glass fibers [50,151]. Owing to developments in the recycling industry, silica glass fibers can be extracted from GFRP, but they are in the form of fiber clusters that contain some remaining polymer and filler materials. Dehghan et al. [156] utilized this technology and examined the mechanical properties of recycled glass FRC, reporting an increase in the FRC's tensile strength despite a decrease in its compressive strength. Basalt Glass Fibers Different results have been reported on the pre-peak mechanical properties of basalt FRC. Yang and Lian [157] found that chopped water-dispersible strand (micro) basalt fibers at 0.3-0.5% volume represent an optimal dosage for increasing the compressive strength of FRC, while other studies reported that the addition of high volumes of micro and macro basalt fibers does not have a significant effect on the pre-peak mechanical properties of FRC [103,124]. On the other hand, Kabay [158] found that micro basalt fibers with 12 mm and 24 mm lengths dispersed at low volumes (2.0 and 4.0 kg/m³) decrease the compressive strength as the fiber volume increases for both normal and high-strength concrete. These contradictory results originate from different fiber characteristics and different concrete mixtures.
While lower fiber contents can improve the packing of concrete, higher fiber contents, along with lower water-to-cement ratios, can amplify the negative effects of fiber addition on the compressive strength of FRC. Studies have shown that basalt glass fibers enhance the splitting tensile and flexural strengths of FRC, regardless of the fiber content and fiber length [124,157,158]. Furthermore, Jiang et al. [103] found that longer fibers outperform shorter ones in improving splitting tensile and flexural strengths. Saradar et al. [159] investigated the flexural strength of concrete made with 12 mm long basalt, steel, glass, PP, and PO fibers at a 0.1% volume fraction. The study reported that all the fibers increased the flexural strength of concrete; however, basalt and steel fibers made the highest contribution, followed by PO, glass, and PP fibers. Similarly, other studies confirmed the superior splitting tensile and flexural performance of basalt glass fibers in comparison to silica glass fibers [104,154]. Branston et al. [126] investigated the effectiveness of chopped basalt filament microfiber bundles compared to BFRP macrofibers and concluded that filament basalt fibers can increase pre-crack flexural and compressive strengths in concrete, noting that the BFRP fibers decreased the compressive strength and increased the flexural strength at higher volumes. In a separate effort, Patnaik et al. [160] reported that BFRP macrofibers increased the flexural strength of concrete with increasing fiber content. Post-Peak Mechanical Properties The expected performance and service life of reinforced concrete structures can be significantly affected by the occurrence of cracks, as investigated in several studies [161-168] at various length scales [169-176], with consequences that can go beyond an individual structure [177-179]. The addition of fibers can address this issue, as the fibers act as load-transferring conduits over growing cracks in failed concrete regions, providing a residual (post-peak) strength, which in turn improves the concrete's ductility and toughness. Such properties are critical, especially in response to extreme events that create a high loading demand. Thus, this section is devoted to investigating the post-peak mechanical properties of FRC based on various fiber materials and characteristics, such as diameter, length, and dosage. Polypropylene Fibers Several studies have reported a significant increase in the post-crack residual strength and toughness of FRC as a result of macro PP fiber addition [5,144,146,180]. Cengiz and Turanli [181] tested the toughness, energy absorption, and ductility of shotcrete panels reinforced with steel mesh, steel fibers, and high-performance PP fibers (with a low modulus). While the contribution of high-performance PP fibers to all the measured properties was found promising, increasing the PP fiber dosage beyond 1.1% decreased the ultimate load-bearing capacity, while only negligibly increasing the energy absorption and flexural toughness characteristics. In general, a higher PP macrofiber content leads to a better post-crack performance, although, due to the low stiffness of PP fibers, residual strengths tend to be more positively influenced at larger deflections or wider crack openings.
Based on the outcome of the past studies, it can be stated that, while there is some evidence that the pre-crack mechanical properties of FRC can be modestly improved by PP fibers, the main advantages of adding macro PP fibers are realized after cracks are formed. Nylon Fibers Nylon fibers are utilized to enhance the post-peak characteristics of FRC. When dispersed in low volumes, nylon microfibers have minimal effects on the pre-crack mechanical properties. However, a higher toughness and a more ductile failure mode can be achieved with the addition of nylon fibers [91,182-184]. Zia and Ali [134] investigated the effects of adding jute, nylon, and PP fibers on crack control. It was determined that, while the fiber additions improved the overall mechanical properties of FRC, PP fibers outperformed nylon fibers. This superior performance of PP fibers in comparison to nylon fibers was also observed in the total absorbed flexural energy, where PP fibers caused a 100% increase, while nylon fibers led to a 68% increase. This trend is even more pronounced in the total energy absorbed during splitting tensile tests, which showed a 21% decrease for nylon FRC, whereas PP FRC recorded an 11% increase. Furthermore, it can be understood from the cited study that PP FRC has superior post-peak properties compared to nylon and jute FRC. In a separate effort, Ozsar et al. [135] investigated the use of both macro and micro monofilament nylon fibers in concrete matrices of two different strengths. Comparing micro and macro nylon fibers, the study found that micro nylon fibers increased the compressive strength of the composite and were most effective in reducing plastic shrinkage cracks, while the macro nylon fibers increased the fracture energy and improved the post-crack performance of the tested mixtures. Polyvinyl Alcohol Fibers PVA fibers have tensile strengths in the same range as steel fibers; however, the elastic modulus of PVA fibers is less than 25% of that of steel, as reflected in Table 1. Thus, PVA fibers have the ability to only modestly increase the tensile and flexural strengths of hardened concrete, but they more effectively increase the toughness and ductility properties [109]. Many studies have reported that the addition of micro and macro PVA fibers increases the flexural toughness, flexural residual strength, energy absorption, ductility, and impact resistance of concrete. Furthermore, it has been stated that increasing the fiber volume has often made a positive contribution to the aforementioned properties [54,108,109,136]. Shafiq et al. [109] compared the pre-peak and post-peak mechanical properties of FRC containing 1.0-3.0% of PVA and basalt glass fibers. Additionally, the study replaced 10% of the cement with metakaolin and silica fume and completed the same tests. It was found that, although basalt glass FRC delivers a marginally higher flexural strength than PVA FRC, the latter has a superior post-peak flexural strength, to the extent that FRC with a 3% PVA fiber addition provides deflection-hardening properties. In a separate effort, Hossain et al. [185] investigated the performance of PVA and metallic micro and macrofibers in SCC. It was reported that the incorporation of both fiber types can greatly enhance the fracture energy of SCC mixtures. This enhancement exceeded a 300% increase in the fracture energy of the SCC made with PVA fibers, which was attributed to the molecular bond formed between the individual PVA fibers and the SCC matrix.
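Toughness and fracture energy figures like the ones quoted above are generally obtained by integrating a measured load-deflection curve from a flexural test. The sketch below illustrates one common bookkeeping approach (total work under the curve, and a work-of-fracture value normalized by the ligament area); the function names and the example numbers are invented for illustration and are not taken from the cited studies.

```python
# Minimal sketch: flexural toughness (work) and a work-of-fracture style
# fracture energy from a beam load-deflection record. Assumes load in newtons
# and deflection in metres; the numbers below are invented for illustration.

def area_under_curve(deflection_m, load_N):
    """Trapezoidal integration of the load-deflection curve (joules)."""
    work = 0.0
    for i in range(1, len(deflection_m)):
        d_delta = deflection_m[i] - deflection_m[i - 1]
        work += 0.5 * (load_N[i] + load_N[i - 1]) * d_delta
    return work

def fracture_energy(deflection_m, load_N, width_m, depth_m, notch_depth_m=0.0):
    """Work of fracture divided by the fractured ligament area (J/m^2)."""
    ligament_area = width_m * (depth_m - notch_depth_m)
    return area_under_curve(deflection_m, load_N) / ligament_area

if __name__ == "__main__":
    # Hypothetical softening response of a small FRC beam
    deflection = [0.0, 0.0005, 0.001, 0.003, 0.005]   # m
    load = [0.0, 12000.0, 9000.0, 5000.0, 2000.0]     # N
    print(round(area_under_curve(deflection, load), 2), "J of flexural work")
    print(round(fracture_energy(deflection, load, 0.10, 0.10), 1), "J/m^2")
```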
Polyolefin Fibers Macro PO fibers are typically used to increase the post-crack residual strength of the concrete. They can improve post-crack ductility and limit crack growth, but due to their low modulus, they are often not as effective at low deflections or small crack widths as other fibers with higher elastic moduli [110]. Ramakrishnan [186] described the use of macro PO fibers in bridge decks and barrier rails. It was reported that the addition of fibers at 1.5% volume not only improves the impact resistance and toughness of the concrete, but also provides a synergistic effect with the rebar, shifting the cracking pattern from a small number of wide cracks to a larger number of narrow cracks, which effectively limits the ingress of corrosive agents into the concrete. Alberti et al. [110] compared the post-peak properties of FRC made with 3.0, 4.5, 6.0, and 10.0 kg/m³ PO macrofiber contents with FRC made with 26 kg/m³ of steel fibers. The study stated that, regardless of fiber volume, toughness and ductility increase with the addition of PO fibers, providing improved residual strengths. From the fracture energy results, it was noted that the addition of PO fibers can raise the fracture energy of concrete to up to 75% of that of steel FRC at a 1 mm deflection, i.e., 1/300 of the span length. However, when the deflection increases to 5 mm, i.e., 1/60 of the span length, PO fibers outperform steel fibers by 40%, demonstrating the higher efficiency of PO fibers at large deflections. It should be noted that an admixture for improving the fiber-matrix bond had been used in the cited study for the high-volume fiber mixtures. Carbon Fibers FRC mixtures made with 0.5% carbon, PP, and steel fibers were tested by Yao et al. [3] for their post-peak properties. Steel fibers drastically outperformed carbon and PP fibers in residual flexural strength and flexural toughness. Carbon fibers were found to increase the concrete's residual strength, especially at smaller deflections. However, FRC with PP fibers showed a higher residual flexural strength at larger deflections. In addition, Chen and Chung [142] found that the flexural toughness of FRC made with 0.19% carbon fibers can increase by more than 150%. Polyethylene Fibers Low-strength PE fibers are effective in increasing the post-crack flexural ductility, especially at large deflections [187]. Yamaguchi et al. [118] showed that the addition of high volumes of micro HSPE fibers to concrete significantly increases its toughness and extreme load resistance. Moreover, Soroushian et al. [143] reported that HSPE fibers provide an impact resistance comparable to fibrillated PP fibers at low volumes. In a separate study, Pešić et al. [66] investigated the pre-peak and post-peak mechanical properties of FRC with 0.40%, 0.75%, and 1.25% volume fractions of recycled high-density PE fibers. For this purpose, two series of fibers (with 23 mm and 30 mm lengths and 0.25 mm and 0.40 mm diameters, respectively) were used. The cited study reported that a satisfactory flexural toughness and residual strength can be achieved by adding the recycled HDPE fibers. The residual flexural strength was found to be higher for the FRC samples made with shorter fibers (i.e., from 25% to 45% of the flexural peak value) and lower for the FRC samples with longer fibers (i.e., from 13% to 32% of the flexural peak value).
Furthermore, it was stated that recycled HDPE fibers can be effectively used in structural concrete, as they exhibit post-peak mechanical properties similar to PP fibers. Polyester Fibers There are only a few studies on the post-peak properties of non-recycled polyester fibers, while there are several studies conducted on recycled PET fibers. Swamy and Barr [144] used 20 mm long non-recycled polyester fibers with a high aspect ratio at volumes up to 1.0% and reported a 100% increase in the impact strength of FRC compared to that of plain concrete. The incorporation of recycled PET fibers in concrete was also found to enhance the post-peak properties of concrete. In particular, it has been reported that PET FRC benefits from high toughness and ductility [70,119,146]. Kim et al. [70] investigated the ductility and ultimate flexural strength of FRC containing 0.5%, 0.75%, and 1.0% recycled PET and crimped PP fibers (with a length of 50 mm). The cited study showed that recycled PET and PP fibers have similar performance characteristics and that the addition of fibers increased the ductility and ultimate flexural strength significantly. According to Fraternali et al. [146], PET fibers with a higher tensile strength can deliver a higher flexural ductility. Kim et al. [70] reported that the ductility of a full-scale beam made with PET FRC can increase to up to 10 times that of the reference beam with no fibers. The toughness of PET FRC increased with increasing fiber volume [119]. Additionally, longer fibers were determined to further improve the toughness characteristics compared to shorter fibers, mainly because of the increased fiber-matrix bond strength [119]. Acrylic Fibers Ductility and post-crack residual strength are known to increase with the incorporation of PAN fibers into the concrete matrix [76,77]. Among the limited studies available, Fan [121] investigated the contribution of PAN microfibers to the post-peak mechanical properties of FRC. In particular, fiber volumes between 0.5% and 2.0% were tested for their influence on impact toughness. The study concluded that the addition of PAN fibers enhances the impact toughness of concrete by up to 250%, noting that a 1.0% volume of PAN fibers provided the highest improvement. Aramid Fibers Nanni [122] found that a significant increase in the concrete's post-crack residual strength and toughness can be achieved by adding AFRP fibers. The study reported that AFRP fibers greatly outperform PP fibers, while providing benefits similar to steel fibers. It is important to mention that steel fibers are prone to losing a portion of their capacity with the progress of corrosion, while aramid fibers would not undergo any conventional corrosion. Abeysinghe et al. [188] tested twisted Technora fibers with a 40 mm length to investigate their contributions to the extreme load resistance of concrete panels. A 1.0% volume of aramid fibers was found to reduce crack widths and eliminate spalling under such exposures. In addition, a 15% increase in the toughness of RC beams reinforced with 1% of twisted Technora fibers was reported. Silica Glass Fibers Regardless of fiber length and volume fraction, silica glass fibers are found to increase the concrete's ductility and toughness [151,154,155]. Furthermore, the splitting tensile and flexural energy absorption of concrete is reported to increase, despite the drop in compressive energy absorption [50,104].
Silica glass fibers have delivered a superior performance in increasing the toughness of concrete compared to nylon fibers, while their performance was similar to that of PP fibers. On the other hand, steel fibers significantly outperformed glass fibers [50,151]. Arslan [104] used a range of 0.5 kg/m³ to 3.0 kg/m³ of silica glass fibers to measure the fracture energy of a set of beam samples. It was reported that adding 1.0 kg/m³ of silica glass fibers was the most efficient dosage, increasing the fracture energy by up to 35% compared to the plain concrete. It was also mentioned that the specimens with silica glass fibers achieved a higher ductility and energy dissipation capacity compared to the basalt FRC specimens. Basalt Glass Fibers Basalt glass fibers are known to be effective in enhancing the concrete's post-peak mechanical properties. Jalasutram et al. [189] reported that the addition of basalt glass fibers to the concrete can considerably augment the deformability and flexural toughness of basalt FRC. Furthermore, the addition of basalt fibers has been determined to be more effective in increasing the ductility and crack resistance of FRC, compared to silica glass fibers. This trend is reversed for fracture energy, where silica glass fibers outperform basalt glass fibers [104,154]. Branston et al. [126] compared the effectiveness of chopped basalt filament microfiber bundles to that of BFRP macrofibers (minibars) and concluded that the BFRP macrofibers had a much better post-peak performance. When a mixture made with 2.0% volume of 43 mm-long BFRP macrofibers was tested under flexure, an outstanding post-crack performance was recorded, characterized by high ultimate strength, high residual strength, and initial post-crack strain hardening followed by gradual strain softening at high deflections. In a study performed on BFRP macrofibers [190], it was found that the ratio of the average post-crack residual strength to the first-crack strength can reach 0.75 with a 2.0% fiber volume and as high as 1.00 with a fiber volume of 4.0%. This clearly reflects that high volumes of BFRP macrofibers can provide superior post-crack performance in FRC products. Patnaik et al. [125] indicated that BFRP macrofibers control crack widths better than high-tenacity macro PP fibers in beams subjected to accelerated corrosion and then tested in flexure. This was attributed to the fiber's increased stiffness and the superior bond properties between the impregnating resin and the concrete matrix. Patnaik et al. [160] reported that increasing the dosage of BFRP macrofibers in concrete further increases the post-crack residual strength of FRC. Furthermore, Patnaik et al. [191] investigated the addition of low volumes of BFRP macrofibers and high-tenacity PP macrofibers to the concrete used in bridge decks. The cited study found that BFRP macrofibers are more effective than high-tenacity PP macrofibers in controlling the crack width and increasing the ductility. Shrinkage Water plays a critical role in initiating and sustaining the hydration reactions required to achieve the desired fresh and hardened properties of concrete. However, the consumption or loss of water can result in shrinkage, inducing tensile stresses that can exceed the relatively low tensile strength of the concrete matrix [192]. The formation of cracks expedites the transport of corrosive agents, mainly chloride ions and CO2, into concrete, endangering the durability of reinforced concrete structures [193-197].
Several methods have been suggested to help concrete withstand shrinkage-induced cracks, including the use of shrinkage-compensating cement [198-200] and/or fibers. When fibers are incorporated into a concrete mixture, even in low dosages, they can withstand the induced tensile stresses, preventing the formation of cracks that can endanger the life-cycle performance and durability of concrete structures. Thus, it is imperative to investigate and understand how various fiber characteristics contribute to lowering the shrinkage potential and minimizing the extent of cracks in the concrete matrix, especially at early ages. Polypropylene Fibers PP fibers can greatly reduce the drying shrinkage cracking of concrete by increasing the capacity of FRC to resist shrinkage-induced strains [201]. The available studies have shown that plastic shrinkage in concrete can also be limited by using PP fibers [202,203]. Furthermore, it has been reported that increasing the fiber dosage can help with reducing (or even eliminating) the shrinkage-induced effects by minimizing the number of cracks and their widths. Fibrillated PP microfibers have a relatively small diameter and high aspect ratio, making them more effective for controlling plastic shrinkage cracks in fresh concrete compared to PP monofilaments [202,204]. Nylon Fibers Nylon fibers have been found to be effective in restricting the propagation of drying and plastic shrinkage cracks in concrete. Nam et al. [205] substituted the natural fine aggregates with recycled aggregates and observed an increase in drying shrinkage. To resolve the issue, low volumes of nylon fibers (in the range of 0.1% to 0.5%) were added. The addition of nylon fibers was found to increase the resistance of the recycled aggregate concrete to shrinkage, even above the resistance of the natural aggregate concrete, which had no fibers. Polyvinyl Alcohol Fibers Both macro and micro PVA fibers are reported to be effective in controlling drying shrinkage cracks in concrete. It has been found that PVA fibers added to the concrete at relatively low volumes (below 0.5%) decrease the shrinkage-induced crack widths by 90% for microfibers and 70% for macrofibers. The PVA fibers did not affect the restrained drying shrinkage stress development rate and the time of first crack generation. However, they controlled the crack widths once cracks initiated. In a separate investigation [206], the pre-crack strength was found not to be greatly influenced by the addition of PVA fibers, but the residual strength was positively impacted. Wongtanakitcharoen and Naaman [51] studied the unrestrained early-age shrinkage of FRC made with 0.1% to 0.4% additions of PVA fibers. The cited study concluded that PVA fibers reduced the unrestrained early-age shrinkage by 34% (on average), providing an improved performance compared to carbon and PP fibers at the same volume fractions. Polyolefin Fibers PO fibers of different lengths and aspect ratios are reported to be effective in controlling plastic shrinkage and thermal cracking in concrete overlays. Shorter fibers proved to be most effective for such applications at the same volume dosage [207]. Yousefieh et al. [208] found that drying shrinkage control in FRC made with a 1.0% PO fiber content is not as effective as that achieved with steel fibers, mainly due to the steel's higher modulus of elasticity. However, the PO fibers were determined to have a better performance than the PP fibers.
Furthermore, the crack initiation time was found to be delayed as a result of the fiber addition. Carbon Fibers Limited studies have measured the effect of carbon fibers on the shrinkage of concrete. Carbon fibers have been shown to be effective in reducing the restrained shrinkage and drying shrinkage cracking potential of carbon FRC [142]. Dopko et al. [116] tested the restrained drying shrinkage of FRC with 0.1%, 0.3%, and 0.5% carbon microfibers. The cited study concluded that, although carbon microfibers show negligible effects on the stress rate and magnitude caused by restrained shrinkage, they can efficiently control the crack opening potential by increasing the tensile strength of FRC. It was also stated that accelerating admixtures (ACC) have an adverse effect on the restrained drying shrinkage of carbon FRC, as captured by the strains recorded during the ring tests. On the other hand, shrinkage-reducing admixtures (SRA) were reported to show a great potential for controlling drying shrinkage-induced strains. It was also noted in the cited study that SRA can compensate for the negative effects of ACC on the drying shrinkage. Polyethylene Fibers Pešić et al. [66] investigated FRC made with recycled PE fibers and found that the total number and width of plastic shrinkage-induced cracks were significantly decreased by the presence of fibers, even at low volumes. The study reported that the crack reduction ratio ranges from 34% to 84% for the samples containing 0.40% to 1.25% recycled PE fibers, respectively. The unrestrained drying shrinkage of concrete was also investigated in the cited study. With a 10-15% drop in the strains recorded, it was stated that the reduction achieved was in the same range as that from PP and other synthetic fibers. In another study, Auchey and Dutta [209] investigated recycled high-density PE fibers. The study concluded that the FRC containing this type of fiber performed equally well as, or better than, that made with PP fibers under freeze-thaw conditions, suggesting that recycled high-density PE fibers can be a secondary reinforcement alternative for resisting shrinkage and temperature gradient effects. Polyester Fibers Recycled PET fibers are reported to be effective in controlling shrinkage-induced cracks. According to Borg et al. [119], recycled PET fibers can reduce plastic shrinkage cracking under accelerated drying conditions, as well as reduce and delay crack opening under restrained drying shrinkage. Kim et al. [70] added that the time to crack formation under restrained drying shrinkage was extended with increasing fiber volume. Pelisser et al. [210] indicated that, among short PP, recycled PET, glass, and nylon fibers, the short PP fibers were the best at controlling the crack initiation caused by plastic shrinkage, while the recycled PET and glass fibers showed a similar performance and the nylon fibers had the weakest performance. This led to recommending short, recycled PET fibers as a promising substitute for PP fibers to limit plastic shrinkage. Acrylic Fibers The addition of PAN fibers minimizes the drying shrinkage cracking potential regardless of the fiber volume fraction, with higher volumes leading to a better performance [77,120]. Fan [121] investigated the effects of PAN fibers on the autogenous shrinkage of FRC. It was reported that reductions of 21.7%, 39.1%, 26.1%, and 17.4% in autogenous shrinkage-induced strains can be achieved in FRC made with 0.1%, 0.5%, 1.5%, and 2.0% PAN microfibers, respectively.
This improvement can be attributed to the contribution of PAN fibers to modifying the FRC's pore structure, which was verified through mercury intrusion porosimetry analyses. Aramid Fibers Zhao et al. [149] investigated 30 mm long Technora aramid macrofibers for their contribution to limiting plastic shrinkage cracking and restrained drying shrinkage at volumes between 0.2% and 1.2%. The addition of 0.4% volume of aramid fibers (and above) was found to eliminate plastic shrinkage cracks. Furthermore, the cited study reported that the addition of 0.8% volume of aramid fibers (and above) can decrease drying shrinkage strains by 15%. Silica Glass Fibers Small additions of glass fibers have been found effective in controlling shrinkage-induced cracks. Barluenga and Hernández-Olivares [150] studied drying shrinkage under both free and restrained conditions. The cited study found that even the addition of very small amounts of micro glass fibers (600 g/m³) makes a significant contribution to reducing the cracked area and the length of cracks in both regular concrete and SCC. It was also concluded that, although increasing the fiber content increases the concrete's ability to withstand drying shrinkage, the efficiency of fibers begins to diminish beyond a certain dosage [150]. Soranakom et al. [211] indicated that the silica glass fiber addition enhances the concrete's crack resistance against drying shrinkage by delaying the time of cracking and lowering the crack width. In the case of restrained plastic shrinkage, Malathy et al. [212] tested different volume fractions of micro silica glass fibers and found that the fibers are very effective in controlling plastic shrinkage, even in concretes containing silica fume. In a separate effort, Dehghan et al. [156] investigated the effect of recycled silica glass fibers on the drying shrinkage of FRC and concluded that the ability of concrete to withstand drying shrinkage is not improved with this type of fiber. Such an assessment was justified based on the low stiffness of recycled silica glass fibers. Basalt Glass Fibers Branston et al. [126] tested the effects of filament dispersion and bundle dispersion of basalt fibers (up to 0.3% of concrete volume fraction) on controlling free and restrained plastic shrinkage. The cited study reported that basalt fibers are highly effective in improving the concrete's ability to withstand plastic shrinkage by decreasing shrinkage-induced strains and limiting the crack growth. It was found that, regardless of the type, the addition of a 0.1% volume fraction of basalt glass fibers can eliminate plastic shrinkage-induced cracks. It was also reported that this volume fraction can be further reduced by utilizing smaller-diameter filaments. In particular, filament dispersion was determined to deliver the best performance compared to bundle dispersion and BFRP minibars [126]. Consistent with the reported results, Saradar et al. [159] evaluated the early-age restrained shrinkage of various FRC mixtures and reported that the concretes containing PP and steel fibers had the smallest crack widths in comparison to those made with basalt and silica glass fibers. As for the initiation of the first crack, it was found that the mixtures with basalt glass fibers had the earliest crack initiation time. Additionally, it was reported that, as the stiffness of the fibers increases, the flexural strength of the FRC increases, while the fibers' ability to limit restrained shrinkage declines.
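Many of the shrinkage results summarized in this section are expressed as percentage reductions relative to a plain (fiber-free) control, whether in shrinkage-induced strain, crack width, or cracked area. A minimal sketch of that bookkeeping is shown below; the input values are invented placeholders for illustration, not data from any of the cited studies.

```python
# Minimal sketch: percentage-reduction metrics commonly used to report how well
# fibers control shrinkage relative to a plain control mixture.
# All numbers below are invented placeholders for illustration.

def reduction_ratio(control_value: float, frc_value: float) -> float:
    """Percent reduction of a shrinkage metric (strain, crack width, or cracked
    area) in the fiber-reinforced mixture relative to the plain control."""
    if control_value <= 0:
        raise ValueError("control value must be positive")
    return (control_value - frc_value) / control_value * 100.0

if __name__ == "__main__":
    # Hypothetical restrained-shrinkage results: total cracked area in mm^2
    cracked_area_plain = 420.0
    cracked_area_with_fibers = {0.1: 260.0, 0.3: 95.0, 0.5: 0.0}  # % fiber volume
    for vf, area in cracked_area_with_fibers.items():
        print(f"{vf:.1f}% fibers: "
              f"{reduction_ratio(cracked_area_plain, area):.0f}% crack-area reduction")
```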
Extreme Temperature Resistance In hardened concrete, water exists in two main forms: free water and bound water, the latter being further categorized into physically bound and chemically bound water. When concrete is exposed to extreme temperatures, the water inside the concrete evaporates, and if the entrapped vapor does not find a way out, it generates internal tensile stresses, which can eventually cause the concrete to spall explosively. One of the methods used to increase the concrete's capacity to withstand extreme temperatures for an extended period of time is the incorporation of fibers. As discussed in Section 4, the addition of fibers enhances the tensile strength of concrete, which is desirable in resisting extreme temperatures. In addition, synthetic fibers have a relatively low melting point, which can provide an escape route for the entrapped vapor when needed. Given all these aspects, it is important to understand the potential of each fiber type in enhancing the concrete's performance under extreme temperatures. Polypropylene Fibers PP fibers have a relatively low melting point (from 160 to 170 °C [213]) in comparison to most other concrete fibers and should not be used in high-temperature applications, such as autoclave curing [214]. The low melting point of PP fibers gives rise to their application for spalling prevention during fires in concrete structures. As the fibers reach their melting point during a fire, they provide escape routes for the highly compressed gas caused by the vaporization of moisture inside the concrete [182]. As the water-to-cement ratio decreases, the need for PP fibers to prevent spalling increases. In addition, shorter fibers have shown a better contribution to fire resistance compared to longer ones. Nylon Fibers Nylon fibers have a melting point between 215 and 265 °C [215]. When exposed to extreme temperatures, ordinary concrete undergoes severe damage; however, experiments have shown that when nylon fibers are incorporated into a concrete mixture, they can reduce the damage and prevent spalling. Additionally, increasing the fiber content leads to an improved protection for FRC against fire and extremely high temperatures [107,182]. Polyvinyl Alcohol Fibers The melting point of PVA fiber is reported to be between 220 and 230 °C [216-218]. Heo et al. [107] utilized PP, PVA, and nylon fibers with different lengths and volume fractions in concrete to evaluate their efficiency when exposed to elevated temperatures. The cited study found that PVA fibers were helpful in controlling the spalling and retaining the compressive strength of FRC. PVA fibers provided better spalling control than PP fibers; however, nylon fibers outperformed PVA fibers, owing to the higher number of nylon fibers per unit volume. Moreover, it was reported that, for the concrete specimens that did not experience severe damage from the extreme temperatures, the PVA-containing samples retained a higher residual compressive strength than those with PP fibers [107]. Polyolefin Fibers PO fibers have a (relatively) low melting point, which is reported to be 150 °C [219]. Hence, when exposed to fire, the PO fibers tend to melt quickly and create escape channels for water vapor. Therefore, they have the potential to enhance the concrete's ability to prevent or delay spalling, while maintaining a higher strength after an extreme heat exposure. Carbon Fibers There are limited studies concerning the fire resistance of carbon FRC.
Chen and Liu [220] compared the performance of carbon, steel, and PP fibers in high-strength concretes exposed to high temperatures (up to 800 °C). The cited study reported that the samples with no fibers experienced explosive spalling. The addition of carbon and steel fibers delayed the spalling, while the addition of PP fibers completely eliminated it. These observations were attributed to the behavior of the fibers exposed to elevated temperatures. Carbon and steel fibers can delay the initiation and propagation of microcracks and microdefects, owing to their (relatively) high elastic modulus. However, PP fibers melt due to their low melting point, generating evacuation channels for water vapor inside the concrete. Additionally, the cited study measured the residual compressive and splitting tensile strengths of the specimens exposed to extreme temperatures and found that the concretes with carbon and steel fibers retained higher compressive and splitting tensile strengths, compared to those with PP fibers [220]. Polyethylene Fibers PE fibers have a low melting temperature, i.e., 130 °C for regular HDPE fibers and 150 °C for gel-spun HDPE fibers [22,221]. Therefore, when PE FRC is exposed to extreme temperatures, the fibers tend to melt and generate empty channels, which can help with water evaporation, thus improving the performance of concrete after fire exposure. Sukontasukkul et al. [222] stated that the pre-peak responses of plain concrete and FRC were similar after fire exposure, but that, for post-peak properties, exposure temperature and fiber type are the two main parameters influencing the performance of FRC, especially if the temperature goes above 400 °C [222]. Polyester Fibers PET fibers have a low melting point of 160 °C, as reported by Sadrmomtazi and Tahmouresi [223]. Therefore, they tend to melt when exposed to high temperatures [74]. Increasing the PET fiber content enhances the concrete's ability to withstand the negative effects of severe temperatures. Song et al. [224] found that lowering the fiber's diameter can enhance the FRC's potential to tolerate fire effects. In a separate study, Choi et al. [225] investigated the effect of PET fibers on reducing the spalling of high-strength concretes exposed to two different fire curves (i.e., RABT and ISO 834) and reported that a 0.2% addition of PET fibers can completely eliminate the spalling of concrete in both heat exposure conditions. Acrylic Fibers Mo et al. [120] investigated low volumes (below 0.2%) of acrylic microfibers in lightweight oil palm shell concrete that contained ground granulated blast furnace slag. The study observed that the acrylic fibers were effective in preserving the concrete's strength after heat exposure, owing to the low melting point of the fibers, which allowed the entrapped gas to escape the concrete. The melting point of the PAN fiber was reported to be 145 °C in Mo et al. [120], while Moody and Needles [226] indicated that the PAN fiber's softening point is between 190 and 250 °C. Aramid Fibers Aramid fibers are reported to start losing tensile strength at 200 °C and to completely lose their strength when the temperature reaches 400-500 °C, which is a relatively high tolerance compared to other fibers [22]. Therefore, considering the concrete's thermal conductivity, aramid fibers do not melt when exposed to fire, unless a long exposure to heat occurs. Consequently, aramid fibers can help FRC further maintain its strength by bridging the cracks even under fire.
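Because the two protective mechanisms discussed in this section hinge on whether a fiber melts below the temperature reached in a fire, the melting points quoted above can be collected into a simple classification. The sketch below is a deliberate simplification (it ignores exposure duration, heating rate, and the thermal gradient through the concrete cover); the grouping reflects only the approximate values cited in this section.

```python
# Minimal sketch: classifying fibers by the approximate melting points quoted in
# this section. A simplification: real spalling behaviour also depends on heating
# rate, exposure time, moisture content, and the concrete's thermal gradient.

APPROX_MELTING_POINT_C = {   # lower bounds of the ranges cited above
    "PP": 160,
    "nylon": 215,
    "PVA": 220,
    "PO": 150,
    "HDPE": 130,
    "PET": 160,
    "PAN": 145,
}
# Carbon, aramid, and glass/basalt fibers do not melt at typical fire temperatures
NON_MELTING = ["carbon", "aramid", "silica glass", "basalt glass", "steel"]

def classify(exposure_temp_c: float):
    """Split fibers into those expected to melt (vapor-escape channels) and
    those expected to stay intact (continued crack bridging)."""
    melted = [f for f, mp in APPROX_MELTING_POINT_C.items() if exposure_temp_c >= mp]
    intact = [f for f, mp in APPROX_MELTING_POINT_C.items() if exposure_temp_c < mp]
    return melted, intact + NON_MELTING

if __name__ == "__main__":
    melted, intact = classify(200.0)
    print("Expected to melt at 200 °C:", melted)
    print("Expected to remain intact:", intact)
```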
Silica Glass Fibers Mirza and Soroushian [155] compared the performance of FRC made with silica glass fibers to that of plain concrete after exposure to high temperatures. It was reported that, since silica glass fibers do not melt, they help the concrete maintain a (relatively) high flexural strength after heat exposure, as the fibers prevent cracking initiated by the stresses from water vapor and subsequent cooling shrinkage. Basalt Glass Fibers Basalt glass fibers have an outstanding thermal stability, which is reflected in a study conducted by Sim and Park [227]. In the cited study, the FRC mixtures that contained silica glass, carbon, and basalt glass fibers were heated for two hours at various temperature levels, including 100, 200, 400, and 600 °C. It was found that the mixtures made with silica glass and carbon fibers lost more than 40% of their strength, while the basalt FRC preserved 90% of its strength. Additionally, when the fibers were heated to 1200 °C, it was observed that the silica glass and carbon fibers totally melted, while the basalt glass fibers retained their geometry and mechanical integrity [227]. Synthesis and Recommendations The addition of fibers can greatly alter many fresh and hardened properties of concrete, depending on the fiber's chemical and physical characteristics. Based on several investigations reviewed in the current study, this section provides a synthesis of the most common trends and observations reported on the performance of fibers in each of the categories of stability and bond, workability, pre-peak and post-peak mechanical properties, shrinkage, and extreme temperature resistance. Additionally, to help the researchers and engineers in the concrete industry with the selection of fibers, the performance of each of the fiber types in each of the performance categories has been qualitatively ranked as weak (W), fair (F), good (G), or excellent (E) in Table 3. For the combinations that had very limited information available in the literature, not enough information (NI) has been listed. Stability and Bond Portland cement concrete provides a highly alkaline environment, with a pH value as high as 13.5, which protects steel rebars from corrosion. Such an environment is proven to cause deterioration in some fiber types. Therefore, ensuring their long-term stability is of paramount importance. Among the fibers investigated, both silica and basalt glass fibers have been shown to degrade significantly in the concrete matrix, reflecting the need to employ alternative methods to enhance the alkalinity tolerance of the glass-based fibers. Such methods range from the coating of the individual fibers with alkali-resistant materials to covering the fibers with polymer resins. In addition, aramid fibers, especially the acid-spun type, are reported to undergo notable degradation in concrete. Furthermore, PAN and PET fibers are susceptible to some level of degradation in alkaline environments. The other reviewed fibers, however, have shown great chemical stability in concrete. When FRC is subjected to external loads, fibers tend to fail by pull-out and/or rupture. The occurrence of these two modes of failure is a function of the fiber's elastic modulus and the fiber-matrix bond. As the modulus of elasticity decreases and the fiber-matrix bond increases, the majority of fibers tend to fail due to rupture.
The bond between the individual fibers and the concrete matrix is often governed by the concrete's properties, such as the water-to-cement ratio, and by fiber characteristics, such as material, length, and shape. One of the main reasons for the addition of fibers to a concrete mixture is their ability to enhance the concrete's post-peak mechanical properties, in terms of ductility and residual strength. Thus, a high fiber-matrix bond that leads to a sudden fiber rupture is not favorable. On the other hand, a weak fiber-matrix bond introduces other issues, particularly by limiting the capability of fibers to bridge cracks. Therefore, an optimum bond is desired for fibers to have the best efficiency. PVA fibers form a chemical (hydrogen) bond with the concrete matrix, while the other fibers primarily have a mechanical bond. Thus, PVA fibers tend to fail due to rupture, and a high fiber-matrix bond is expected from them. Nylon, PAN, and PO fibers are reported to form a relatively strong bond with the concrete due to, respectively, their swelling and increased friction, their irregular cross-section, and the damage they sustain during the mixing process. In contrast, PE fibers form a relatively weak bond with the concrete matrix, presenting a challenge to their use in FRC. Several methods have been attempted, with various degrees of success, to increase the fiber-matrix bond. Among the examples are using the fibrillated form, changing the shape of individual fibers, and adding coating materials. Workability Regardless of the dosage and characteristics of fibers, the addition of fibers decreases the workability of concrete. This is further exacerbated by increasing the fiber content. Longer fibers can cause higher friction with the fresh concrete, further reducing the workability of FRC mixtures. However, it should be noted that, at a fixed volume fraction, shorter fibers can cause a greater decrease in workability, due to their higher surface areas. Studies have shown that using proper admixtures, such as water reducers and pozzolans, in addition to employing appropriate mixing equipment, increases the maximum amount of fibers that can be included in the FRC without causing workability issues. PVA fibers can significantly decrease the workability of FRC due to their water absorption characteristics. Nylon fibers are also reported to have some water absorption. While this helps them further disperse in the concrete at low volume fractions, a significant reduction in the workability is observed at high volume fractions. Compared to other fiber types, basalt glass fibers are reported to mix well with concrete, mainly due to having a density similar to the concrete's density. Pre-Peak Mechanical Properties The studies available in the literature have reported sometimes contradictory observations regarding the effects of fiber addition on the compressive strength of FRC. This can be attributed to the different properties of the concrete and fibers used in the experiments. When it comes to compressive strength, dense packing plays a key role in ensuring that the entrapped air is minimal. Therefore, regardless of fiber type, using short fibers at a low dosage can lead to an increase in the compressive strength. This can be achieved well, especially in concrete mixtures made with a high water-to-cement ratio. On the other hand, fibers have been consistently reported as an effective addition for improving the concrete's splitting tensile and flexural strengths, with macrofibers leading to more pronounced improvements in comparison to microfibers.
Increasing the fiber content can result in improved splitting tensile and flexural strengths, as long as the fibers are well dispersed. In general, the higher the fiber's modulus of elasticity, the better it can enhance the splitting tensile and flexural strengths of the concrete. Comparative studies have shown that glass fibers can improve the flexural strength of concrete more than PO fibers, while both are more efficient than high-tenacity PP and nylon fibers. High-modulus carbon and aramid fibers also greatly help with augmenting the splitting tensile and flexural strengths of the concrete, as long as the required fiber-matrix bond is achieved. Post-Peak Mechanical Properties One of the main limitations of plain concrete is its brittle behavior after reaching the ultimate strength, which appears as a peak followed by a sharp drop in the stress-strain curve. In order to overcome this drawback, fibers are incorporated into the mixture to enhance the concrete's post-peak mechanical properties, which span ductility, toughness, and residual strength. Even low volumes of fibers have proven effective in improving all of the post-peak mechanical properties of the concrete, except for some reported cases regarding compressive energy absorption. The reason for the overall positive contribution of fibers is their ability to bridge cracks, facilitating the transfer of loads from one side of the crack to the other. Therefore, higher fiber contents provide more bridging pathways, which help convey more stress and, thus, further increase the post-peak mechanical properties of FRC. Past studies have shown that microfibers are more effective in controlling microcracks, whereas macrofibers deliver a better performance in limiting macrocracks. Therefore, macrofibers can be more efficient in enhancing the post-peak properties of the concrete, since macrocracks are generated after the peak of the stress-strain curve is reached. High-modulus fibers, such as carbon and aramid, followed by glass fibers, have shown great potential to improve the post-peak mechanical properties of FRC. On the other hand, the contribution of nylon fibers to the post-peak mechanical properties is expected to be the least compared to PP, PVA, and HSPE fibers. Shrinkage Since concrete has a low tensile strength, shrinkage-induced cracks caused by water consumption and/or loss can be a great concern for long-term durability. When the concrete loses water as a result of excessive evaporation, internal tensile stresses are generated, with the possibility of exceeding the concrete's tensile strength and leading to the formation and propagation of cracks. To address the issue of shrinkage-induced cracks, adding fibers to the concrete has been investigated in several studies. Regardless of their material characteristics, fibers can reduce the crack width and the number of cracks, while delaying the time to the first crack. Additionally, it has been found that the higher the fiber content, the less shrinkage-induced cracking, to the extent that such cracks can be entirely eliminated. The fiber aspect ratio is one of the important factors affecting the performance of FRC subjected to shrinkage. In general, fibers with a high aspect ratio perform better than those with a low aspect ratio. This is confirmed by the great potential of fibrillated PP fibers to limit shrinkage-induced cracks.
Extreme Temperature Resistance When concrete is exposed to extreme temperatures or fire, the water inside the concrete matrix evaporates, generating an internal vapor pressure, in the form of tensile stresses, that can lead to explosive spalling of the concrete. The addition of fibers, however, has proven to be an efficient way to reinforce the concrete against extreme temperatures. Fibers enable FRC to withstand high temperatures in two ways: first, fibers with a high melting temperature (e.g., carbon, aramid, and glass fibers) can increase the tensile strength of concrete; therefore, they can delay the onset of spalling and help the concrete retain a high residual strength after the fire. Second, fibers with a low melting point (e.g., PP and PE fibers) melt during the fire, forming internal channels that serve as evacuation pathways for water vapor. If a proper combination of fiber dosage and length is employed, explosive spalling of the concrete can be minimized or even eliminated. The available studies suggest that FRC products with fibers from the second category perform better overall when exposed to prolonged extreme temperatures. Conclusions In this review study, a variety of fibers have been investigated for their contribution to different fresh and hardened properties of concrete. Additionally, they have been compared with each other in order to signify their potential in altering each of the properties of interest. From this holistic investigation, the capabilities and limitations of each fiber type can be summarized as: • PP fibers are among the most cost-effective concrete fibers. This advantage, paired with an excellent chemical stability in the concrete environment, satisfactory mechanical properties, and wide availability, has made PP fibers one of the most popular concrete fibers. Fibrillated PP microfibers are primarily used for plastic shrinkage crack control, while monofilament PP macrofibers are employed for controlling the cracks caused by external loads, temperature gradients, or drying shrinkage. • Nylon fibers can absorb the mixing water and, in turn, reduce the workability more than other concrete fibers. These aspects limit the application of nylon fibers to relatively low fiber volumes, especially if microfibers are used. Another limitation of nylon fibers is that, while they provide advantages similar to PP fibers in concrete, they are, in general, more expensive. Increasing attention to recycled nylon fibers can help decrease their unit cost, with possible use for thermal and plastic shrinkage crack control purposes. • PVA fibers form a strong chemical bond with the concrete matrix, increasing the possibility of fiber rupture under external loads, which is not a favorable feature where an increase in post-peak mechanical properties is needed. This feature, along with the relatively high cost of PVA fibers and their significant water absorption, which decreases the mixture's workability more than other fibers, can limit the application of PVA fibers in FRC products. Thus, they are not as readily available as other, less expensive synthetic fibers. • PO fibers have a relatively low elastic modulus, similar to PP fibers, which results in a relatively low residual strength at small crack widths. Most concrete fiber suppliers provide some form of PO fibers, as they work well for crack-control purposes.
PO fibers are relatively inexpensive and fall in the same price range as PP fibers, making them one of the least expensive concrete fibers. However, they are not well represented in the literature, most likely because there is no widespread need for them in practice, given the popularity and abundance of PP microfibers. • Carbon fibers can withstand the alkaline environment of concrete better than glass fibers. They also have a (relatively) high strength-to-weight ratio. However, due to the issues observed when mixing carbon macrofibers with conventional methods, they are not commonly used, especially in mixtures that contain coarse aggregates. In addition to the high price of most carbon fibers, they are often less effective than other synthetic fibers for several concrete applications. Thus, carbon fibers are considered an expensive specialty fiber in the concrete industry. • As a result of their low strength and stiffness, FRC products made with conventional PE fibers can suffer from poor mechanical properties. High-strength and high-stiffness PE fibers, however, have shown satisfactory mechanical properties, with a potential to be used in cementitious composites. Even so, HSPE fibers are not very practical in conventional concrete applications, mainly because of their (relatively) weak bond with concrete. • The use of PET fibers in FRC has been limited to laboratory tests and research investigations at the time of this review. It is expected that, as the production technology and product quality of recycled PET fibers improve in the future, these fibers will gain traction in the concrete industry, owing to their economic and environmental benefits over traditional synthetic fibers. • In the category of acrylic fibers, PAN microfibers have been found to offer effective solutions, as they can provide benefits similar to other low-strength/low-modulus fibers. However, compared to other common synthetic fibers, the literature suggests that acrylic fibers more adversely affect the workability of the concrete mixture, while they can provide an increased fiber-matrix bond strength, along with a significant residual strength at small crack openings. The limited general use of PAN fibers in concrete mixtures that contain coarse aggregates is likely because other, less expensive synthetic fibers can provide similar benefits, especially in the absence of acrylic macrofiber production. • The main drawback of aramid fibers for FRC applications is their cost. Since aramid fibers are relatively expensive and may not provide enough additional benefits over other common concrete fibers, their use has been limited, which can explain the limited number of relevant studies available in the literature. However, recent works on Kevlar and Technora fibers have shown promising reinforcing potential, which can be further exploited, especially if the cost drops. • AR silica glass fibers are able to significantly improve the strength and ductility of concrete, owing to their relatively high strength and stiffness. There are inconsistencies in the workability reported for FRC mixtures made with AR silica glass fibers, but this can be attributed to the fact that these fibers come in a wide range of sizes and surface areas. Degradation of silica glass fibers in concrete is often a concern, which can be addressed with an adequate zirconia content, proper sizing application, concrete binder adjustment, or even polymer impregnation.
The AR silica glass fibers are effectively used as concrete fibers in various applications, with wide availability and a relatively low price. • Basalt glass fibers have shown degradation issues in the concrete's alkaline environment. Similar to silica glass fibers, various actions have been taken to increase their long-term stability in concrete. Basalt microfibers are known to be effective for increasing the splitting tensile and flexural strengths of FRC, in addition to reducing plastic shrinkage cracks. BFRP macrofibers have shown great potential as an effective solution to improve the post-crack performance of FRC and control the propagation of cracks. Basalt glass fibers are anticipated to gain popularity in the concrete industry as production increases and new applications are identified. FRC products are becoming more popular in the concrete industry and are anticipated to continue to grow as researchers, engineers, and contractors become more familiar with designing, mixing, and placing FRC, and as new codes and standards for structural concrete come to recognize the strength and service-life benefits that can be gained from adding randomly dispersed discrete fibers to the concrete. Holistic investigations, similar to the one presented in the current review study, are expected to pave the way for informed decisions regarding the fiber types of choice, depending not only on their cost and availability, but also on the fresh and hardened properties of FRC desired for a wide variety of concrete applications.
31,847.2
2021-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Taxonomy on IoT Technologies for Designing Smart Systems — The Internet of Things (IoT) is now considered among the most rapidly emerging technologies aiming to interconnect heterogeneous smart devices in several areas without human interaction, from sensing and identification to data processing and storage. The building and monitoring of smart systems such as smart homes, smart campuses, and smart cities pass through several stages in which different tools and platforms coexist to answer this need. In this paper, we present a survey of recent works using IoT tools to build their systems. In particular, we classify and compare them according to layers and characteristics that we have defined, such as data acquisition, data processing, software used, and conformity to standards. Hence, our study aims to provide a clear vision of the functionalities available during the process of building IoT systems based on the most used IoT tools, and to help users choose the most suitable tool depending on their needs. Introduction The Internet of Things (IoT) is above all a concept that allows the Internet to be extended to everyday objects. This technology allows data transfer between different connected objects without human intervention; the devices can then interact with each other in a completely autonomous way. The IoT sector is growing, and the number of connected objects is becoming increasingly important. From the watch to the shower head, everyday objects are increasingly being turned into connected objects for better adaptation to their users. This is a technological breakthrough leading to major changes in our lifestyle. The bridge between the different machines is then established without human interaction to adapt to needs and the environment. The development of these functionalities is possible thanks to the optimization of new electronic technologies such as RFID, WSN, and associated sensors. These various sensors allow the retrieval and storage of user data [1]. Many solutions, including hardware, software, and protocols, are proposed every day to enlarge the IoT market offer. Choosing the right tools has become a challenging task due to the variety of solution categories. Besides, other constraints can be faced during the implementation of these smart systems, such as interoperability of the connected devices, availability, and security concerns [2]. The paper is organized as follows: section 2 presents an overview of the IoT system, followed by the main protocols and issues in smart systems. Section 3 presents the goal of the study, where we focus on the process of building an IoT system; more precisely, we present the most used prototyping tools, middlewares, and platforms. Section 4 presents real academic implementations of smart systems. Finally, we conclude the paper and outline future work. Background on IoT technologies and major issues In this part, we present the concept of the Internet of Things, the most used protocols, and some major issues. Basics The Internet of Things (IoT) is a scenario in which objects, animals, and people are assigned unique identifiers, as well as the ability to transfer data over a network without requiring any human-to-human or human-to-machine interaction. In general, the main components of an IoT system can be presented in three parts: a) Hardware: The hardware part stands for all the equipment required to: • Collect data and interact physically with the environment, such as sensors, tags, cameras...
• Read data gathered from sensors and execute primary data processing, such as readers and microcontrollers... • Filter, synchronize, aggregate, and store data, such as terminals and servers. b) Software: The software part is responsible for processing the data gathered from the hardware part. It can be a middleware, a platform, an operating system... c) Protocols: Protocols represent the communication part between all the IoT system components. The authors in [3] give detailed information about protocols and standards. We have summarized the most used ones in table 1. Issues Despite its several benefits, the IoT raises many challenges that are the subject of ongoing research. Among the major issues, we find availability, interoperability, and security: a) Availability: Many fields using ICT technologies rely strongly on the availability of all components of the system. For example, consider identity verification using RFID communication: if the RFID reader is down, the employee will be blocked. Another serious example is payment using NFC technology: if your smartphone is out of energy, you will not be able to complete the transaction. Some solutions try to deal with this issue, such as the blockchain infrastructure proposed in [4]. b) Interoperability: Due to the emergence of millions of devices connected across the whole network, and also the problem of the coexistence of different standards, many devices cannot be recognized from one system to another. Therefore, the communication between sensors and the backend will be impossible. Many projects, such as ReAA and OMO.IN.STA.NT, try to respond to this need, but a lasting standard solution does not yet exist, as argued in [5]. c) Security: Security represents a serious concern in IoT infrastructure. The main problem is to guarantee the safety of IoT components and people's privacy during the whole process of data exchange. In other words, sensors, data processing units, and servers should be kept confidential, unaltered, and available (CIA). Depending on the layer of the system (physical layer, communication layer, or backend layer), threats and countermeasures are proposed in [6]. Motivation and design goal As described in the previous section, a typical IoT system is composed of a hardware part, a software part, and a communication part. In this work, we aim to give some details about the platforms involved in designing an IoT system. This work will help us decide which software is appropriate for our context. We believe that the flow of information in a typical IoT system passes through several steps, as shown in figure 1. IoT tools For designing an IoT system, several software tools can be found on the market and in the research field; in this section, we try to present the most well-known software and tools. We can classify them into three categories: Prototyping tools. When you are creating an IoT application, a prototyping tool is the solution for generating the first small physical model of the system. In fact, this step is essential to test the atomic functionalities of your application, to measure and run performance tests, and also to analyze the feasibility of, and connections between, the system components. Prototyping tools are generally composed of a hardware part containing the processing unit interacting with data-source devices such as sensors, and a software part dedicated to developers for reading data and developing the application which will be deployed on the prototype's main processing unit. Table 2 shows an example of some IoT prototyping tools. Middleware.
The middleware is considered the interface between the business application and the hardware layer. In the case of the Internet of Things, the middleware acts as the orchestrator between the backend servers and the data collection layer: it gives the orders to read data from the sensors, acts on the physical environment if there are any actuators, filters and processes the collected data, and then sends the data to the backend part for further treatment. The difficulty here is the emergence of various types of middleware every day, with different programming abstraction levels and architectures for accessing the connected IoT devices [10]. Table 3 shows a brief overview of some known middlewares. Platforms. A platform is seen as the global level of implementing an IoT application, grouping all the tools a developer needs for designing, developing, deploying, storing, and analyzing data. Data management infrastructure remains among the criteria most sought by companies when choosing the appropriate platform [2]. Table 4 shows an example of full IoT platforms. Literature implementations In this section, we present an overview of recent implementations of smart systems; more importantly, we take into consideration the full process of designing an IoT system. For each work, we consider some parameters to take into account during the process of building the authors' applications; the following criteria are taken into consideration: • Data acquisition: specifies the source of data; it can be sensors in the case of a wireless sensor network, tags in the case of RFID communication, or other technologies. • Communication / protocols: As indicated before, many protocols are supported during the process of exchanging data between the data acquisition layer (processing unit) and the backend layer. • Processing unit: In the case of using a prototyping tool, this field specifies the hardware used for local data processing. • Software: As argued in section A), we determine whether any specific platform or middleware is used to design and run the developed application. • Compliance with IoT standards: Compliance with standards is among the important issues in IoT applications; for this reason, we verify, for each work, whether this point has been taken into account by the authors. Table 5 presents some recent works that address IoT smart systems. The goal of this study is to provide researchers with the IoT tools most used when designing their smart systems. As sketched in the previous sections, the first step in a smart system is data acquisition, which can come from a single sensing component, such as the Pi camera used in [16] to detect empty parking spaces, or from a combination of several data acquisition components to identify, locate, and sense, such as tags, beacons, and sensors, respectively. The use of specific protocols and communication technologies is justified for each system architecture, context of use, and component that sends the gathered data to the processing unit. All these protocols aim at one primary gain, which is reducing power consumption. For instance, the authors in [21] used beacons to detect buzz in the campus area. Here, proximity is estimated from the BLE communication technology, which gives a high accuracy of proximity estimation with a reasonable range and low power usage compared to NFC and GPS.
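To make the middleware role described above more concrete, the following minimal Python sketch shows one way readings from heterogeneous acquisition components could be normalized into a common message format, filtered, and dispatched toward the backend layer. The device names, fields, and thresholds are hypothetical, and the final print call stands in for a real publish step (e.g., over MQTT or CoAP); this is an illustrative sketch, not the architecture of any of the surveyed systems.

```python
# Toy middleware sketch: adapters translate readings from heterogeneous
# acquisition components (sensor, RFID tag, BLE beacon -- hypothetical examples)
# into one common message format, which a dispatcher filters and forwards.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Event:
    source: str        # which acquisition component produced the reading
    kind: str          # e.g. "temperature", "rfid_read", "proximity"
    value: object
    timestamp: float

def from_temperature_sensor(raw_celsius: float) -> Event:
    return Event("dht22-lab-1", "temperature", raw_celsius, time.time())

def from_rfid_reader(tag_id: str) -> Event:
    return Event("rfid-gate-A", "rfid_read", tag_id, time.time())

def from_ble_beacon(rssi_dbm: int) -> Event:
    # crude proximity class derived from signal strength (illustrative threshold)
    proximity = "near" if rssi_dbm > -70 else "far"
    return Event("beacon-campus-3", "proximity", proximity, time.time())

def dispatch(event: Event) -> None:
    """Filter and forward a normalized event to the backend (stub)."""
    if event.kind == "temperature" and not (-40 <= event.value <= 85):
        return  # drop implausible readings before they reach the backend
    print(json.dumps(asdict(event)))  # stand-in for an MQTT/HTTP publish

for ev in (from_temperature_sensor(21.4), from_rfid_reader("04:A3:1C"),
           from_ble_beacon(-62)):
    dispatch(ev)
```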
These data are then managed and exploited by the software, more precisely by the middleware dedicated to combining, processing, filtering, and dispatching data for further treatment locally or in the cloud. Collecting data from one category of data acquisition components can be handled with the dedicated software of the processing unit; for example, the authors in [18] used an Arduino to mediate between sensors, actuators, and the back-end layer. The problem arises when the data come from different kinds of components, as in the design of [20], in which the authors collect data from tags, a camera, a fingerprint sensor, and a smart card. Each technology listed in that work uses its own SDK and interface to interact between the data acquisition layer and the backend layer, which justifies the design of a specific middleware that can synchronize the several technologies. According to our vision, the system designed in [20] can be presented as in figure 2. Conclusion In this work, we have presented a survey of IoT tools aimed at designing a smart system. In particular, we explain the basics of implementing an IoT system and, more importantly, clarify the ambiguity concerning the tools used for this need. In addition, we compare some examples of the most used tools to make it easier to choose tools for the appropriate context. Moreover, we have completed our work by studying some real academic implementations, based on criteria that we have defined, to provide researchers with models and prototyping methods used to design and implement a smart system. As future work, we aim to use IoT tools, based on this study, to design and implement our own system: a smart application dedicated to student management.
2,693.6
2018-09-29T00:00:00.000
[ "Computer Science" ]
Overexpression of γ-Sarcoglycan Induces Severe Muscular Dystrophy IMPLICATIONS FOR THE REGULATION OF SARCOGLYCAN ASSEMBLY* The sarcoglycan complex is found normally at the plasma membrane of muscle. Disruption of the sarcoglycan complex, through primary gene mutations in dystrophin or sarcoglycan subunits, produces membrane instability and muscular dystrophy. Restoration of the sarcoglycan complex at the plasma membrane requires reintroduction of the mutant sarcoglycan subunit in a manner that will permit normal assembly of the entire sarcoglycan complex. To study sarcoglycan gene replacement, we introduced transgenes expressing murine γ-sarcoglycan into muscle of normal mice. Mice expressing high levels of γ-sarcoglycan, under the control of the muscle-specific creatine kinase promoter, developed a severe muscular dystrophy with greatly reduced muscle mass and early lethality. Marked γ-sarcoglycan overexpression produced cytoplasmic aggregates that interfered with normal membrane targeting of γ-sarcoglycan. Overexpression of γ-sarcoglycan led to the up-regulation of α- and β-sarcoglycan. These data suggest that increased γ-sarcoglycan and/or mislocalization of γ-sarcoglycan to the cytoplasm is sufficient to induce muscle damage and provides a new model of muscular dystrophy that highlights the importance of this protein in the assembly, function, and downstream signaling of the sarcoglycan complex. Muscular dystrophy is a genetically heterogeneous disease, but loss-of-function mutations in the genes encoding the dystrophin-glycoprotein complex (DGC) contribute significantly to the muscular dystrophy phenotype. Therapy aimed at restoring the missing proteins in muscle has been initiated in humans with these disorders (1). Sarcoglycan is a subcomplex within the DGC, and mutations in the α-, β-, γ-, and δ-sarcoglycan genes have been described (2). Most sarcoglycan gene mutations destabilize the entire sarcoglycan complex at the plasma membrane of muscle myofibers and cardiomyocytes (3)(4)(5)(6)(7). Expression of dystrophin and sarcoglycans seems to be tightly regulated. Primary mutations in dystrophin lead to a drastic reduction of dystrophin-associated proteins, including sarcoglycans (8-10). Replacement of missing DGC components can be achieved with transgenesis or viral-based approaches to evaluate the assembly and function of the DGC. Transgene-mediated expression of dystrophin in the mdx mouse, a mouse model for Duchenne muscular dystrophy (DMD), eliminates dystrophic symptoms (11), and transgenic rescue of dystrophin expression has been used to identify structure-function relationships within the DGC. In these studies, expression or overexpression of dystrophin from the tissue-specific muscle creatine kinase (MCK) promoter was not associated with toxicity (11). However, less is known about sarcoglycan replacement, and unlike dystrophin, the sarcoglycans are transmembrane proteins that assemble within the secretory pathway. Restoration of proper targeting and assembly of the sarcoglycans may involve additional regulatory aspects beyond those required for expression of cytoplasmic proteins. Adeno- and adeno-associated viruses carrying normal genes have been successfully used to rescue skeletal muscle defects in mice lacking dystrophin or sarcoglycans (12)(13)(14)(15). These studies describe the efficiency of protein expression as well as the immune response generated by the introduction of the transfer viruses and the vectors carrying the gene.
The dose response of treatment, including studies of the minimally effective dose and the potential for toxicity related to overexpression, has not yet been fully explored. With the initiation of phase I clinical trials using gene therapy for limb girdle muscular dystrophies (1), it is extremely important to determine the risks and benefits of this potentially very powerful approach. In this study, we generated transgenic mice expressing different levels of γ-sarcoglycan under the control of the MCK promoter. We found that mice carrying a high copy number of the γ-sarcoglycan gene, concomitant with marked overexpression of γ-sarcoglycan protein, showed profound muscular dystrophy as revealed by extreme muscle wasting and premature death. Histopathology showed muscular dystrophy with variable fiber size, centrally placed nuclei, and increased fibrosis. In addition, fast fibers seem to be preferentially affected. In muscle overexpressing γ-sarcoglycan, we found that γ-sarcoglycan failed to reach the cell membrane and was associated with intracellular aggregates. The remaining sarcoglycan subunits were targeted to the sarcolemma; however, α- and β-sarcoglycans were significantly up-regulated at the muscle membrane. These findings suggest that overexpression of γ-sarcoglycan and misregulation of α- and β-sarcoglycan can cause muscular dystrophy. EXPERIMENTAL PROCEDURES Generation of MCKgsg Transgenic Mice-Full-length mouse γ-sarcoglycan (nucleotides 152-1027 of GenBank accession number AF282901) was obtained using mouse skeletal muscle cDNA as template and amplified with the following primer pairs: gsgF, 5′-AGAAGCTTGCCATGGTTCGAGAGCAGTAC-3′; gsgR, 5′-TAGGATCCTCAAAGACAGACGTGGCTG-3′. The polymerase chain reaction product was digested with HindIII and BamHI. The digested polymerase chain reaction product was then ligated into pBluescript II KS (Stratagene, La Jolla, CA) that had been treated with HindIII and BamHI. Polyadenylation and termination signals of bovine growth hormone from pcDNA3 (Invitrogen, Carlsbad, CA) were added at the XbaI site. The muscle-specific creatine kinase promoter was amplified with the following primer pairs: MCK-F, 5′-ATCTCGAGCAGCTGAGGTGCAAAAGGCT-3′ and MCK-R, 5′-ATAAGCTTGGGGGCAGCCCCTGTGCCCC-3′ (16). The resultant 1.35-kb fragment was inserted into the XhoI and HindIII sites. Sequences were verified using cycle sequencing. After digestion with BssHII, the prokaryotic vector was separated from the transgene by sucrose gradient (17). Transgenic mice were generated by microinjection of transgene DNA into the pronucleus of fertilized single-cell BL6/DBA embryos as previously described (18,19). Southern, Northern, and Immunoblot Analysis-The MCKgsg transgene was detected in transgenic mice by Southern blot analysis of tail DNA with an α-32P-labeled DNA probe containing the full-length γ-sarcoglycan coding region (nucleotides 152-1027 of GenBank accession number AF282901). Genomic DNA was predigested with BamHI and HindIII. Signal strength was quantified using ImageQuant software and a STORM 860 phosphorimager (Amersham Pharmacia Biotech). Transgene copy number was determined by comparing the γ-sarcoglycan transgene and the endogenous γ-sarcoglycan gene. MCKgsg mRNA expression was detected with the same full-length γ-sarcoglycan probe as described above.
The following probes to murine sarcoglycan sequences were also used: α-sarcoglycan cDNA coding region (nucleotides 254-1417, GenBank accession number AB024920); β-sarcoglycan partial cDNA coding region (nucleotides 325-834, GenBank accession number AB024921). Total RNA was isolated using TRIzol reagent (Life Technologies, Inc.). Fifteen μg of total RNA was loaded in each lane. For protein analysis, 50 μg of total protein was resolved on a 4-12% Tris-glycine gradient gel (Invitrogen BV/NOVEX, Groningen, Netherlands). Immunoblotting was performed using the following primary antibodies: monoclonal anti-α- and anti-β-sarcoglycan antibodies (Novocastra Laboratories, Newcastle upon Tyne, UK), polyclonal anti-γ- and anti-δ-sarcoglycan antibodies (6), polyclonal anti-dystrophin antibody (7), and monoclonal anti-myosin heavy chain antibody (Developmental Studies Hybridoma Bank, University of Iowa, IA). Horseradish peroxidase-conjugated goat anti-rabbit and goat anti-mouse secondary antibodies (Jackson ImmunoResearch, West Grove, PA) were used. Staining for Ca2+-dependent ATPase Activity-Fiber types (fast and slow fibers) were distinguished using the procedure described previously (20). An alkaline preincubation at pH 9.4 was used. Generation of Transgenic Mice Overexpressing γ-Sarcoglycan-The MCKgsg transgene was engineered by placing the full-length mouse γ-sarcoglycan cDNA sequence under the control of the 1.35-kb MCK promoter (16) followed by bovine growth hormone termination and polyadenylation signals (Fig. 1A). To assess the potential toxicity of high-level γ-sarcoglycan expression, four lines of transgenic mice were selected for this study. Two lines carried low copy numbers of the transgene, and two lines carried high copy numbers of the transgene. Using a full-length γ-sarcoglycan cDNA sequence as a probe, MCKgsg lines 1 and 3 carried approximately three and four copies, respectively, when compared with the endogenous γ-sarcoglycan gene (Fig. 1B) and are considered low copy transgenic mice. MCKgsg lines 2 and 4, considered high copy transgenic mice, carried ~57 and 36 copies of the γ-sarcoglycan gene, respectively, relative to the endogenous gene (Fig. 1B, asterisk). Northern blot analysis showed the transgenic γ-sarcoglycan mRNA at the predicted size of 1.3 kb (Fig. 1C, arrow). MCKgsg line 2 (high copy) exhibited more mRNA expression than MCKgsg line 1 (low copy), and both showed considerably more mRNA than the endogenous 1.6-kb γ-sarcoglycan mRNA (Fig. 1C, asterisk). The MCK promoter expressed γ-sarcoglycan mRNA in both cardiac and skeletal muscle, although lower levels of mRNA expression were seen in cardiac muscle (Fig. 1C). Immunoblot analysis showed that γ-sarcoglycan protein expression correlated with gene copy number, in that high copy number transgenic lines expressed substantially greater amounts of γ-sarcoglycan protein (Fig. 1D, lanes 6 and 9 for MCKgsg lines 2 and 4, respectively). A serial dilution of protein extract from transgenic muscle showed that MCKgsg lines 2 and 4 expressed a 150- and 200-fold increase in the amount of γ-sarcoglycan protein, respectively, when compared with wild type, whereas lines 1 and 3 expressed ~3- and 5-fold more, respectively (Fig. 1D). FIG. 1. Overexpression of γ-sarcoglycan in transgenic mice. A, the construct for the MCKgsg transgene. A 1.35-kb MCK promoter was used to drive expression of the full-length murine γ-sarcoglycan (gsg) sequence. Bovine growth hormone (BGH) termination and polyadenylation (poly(A)) sites were added at the 3′ end. Restriction sites used to engineer the construct are also shown.
B, Southern blot analysis of BamHI- and HindIII-double-digested DNA from transgenic mice carrying low and high copy numbers of the MCKgsg transgene. Both the transgene (arrow) and the endogenous γ-sarcoglycan gene (asterisk) are indicated. C, Northern blot analysis shows mRNA expression from the endogenous γ-sarcoglycan locus (asterisk) and from the transgene (arrow). Lines 2 and 3 were used in this analysis, where line 3 represents a low copy number transgenic line and line 2 represents a high copy number line. Expression is shown from quadriceps (Q), gastrocnemius (G), and heart (H). The total amount of RNA in each lane was determined to be comparable by ethidium bromide staining of the 28 S and 18 S ribosomal RNA (data not shown). D, immunoblot analysis of γ-sarcoglycan overexpression in MCKgsg transgenic lines. Lines 2 and 4 are the high copy number lines, and lines 1 and 3 are the low copy number lines. Expression of γ-sarcoglycan is at the expected 35-kDa size. For lines 2 and 4, the extracts were diluted to compare relative amounts. hi, high copy; low, low copy; wt, wild type. Marked Overexpression of γ-Sarcoglycan Produces Severe and Rapidly Progressive, Lethal Muscular Dystrophy-Of the four independent founder lines and their wild type littermate controls, the two high copy MCKgsg transgenic lines were found to be significantly smaller than their wild type littermates and the low copy number MCKgsg transgenic lines. A representative comparison of a high copy transgenic mouse (MCKgsg line 4) and a normal littermate control mouse is shown in Fig. 2, top panel. Mice from MCKgsg lines 2 and 4 (high copy) were less active and displayed an abnormal gait that was characterized by widened hind limb spacing. We examined skeletal muscles from MCKgsg high copy mice, MCKgsg low copy mice, and wild type littermate controls. The hind limb from MCKgsg line 4 was visibly dystrophic, with marked muscle wasting and gross fibrofatty replacement when compared with wild type mice (Fig. 2, top panel, right side). The muscle mass of individual quadriceps in MCKgsg lines 2 and 4 (0.07 and 0.05 g, respectively) was significantly lower when compared with two control littermates (0.16 and 0.14 g). Similar findings were observed in the fore limbs of high copy MCKgsg lines 2 and 4 (data not shown). The marked muscle wasting was associated with premature death. Two independent high copy founders, from MCKgsg line 2 and MCKgsg line 4, died at day 45 and day 110, respectively. Given the limited survival of the male founder for MCKgsg line 2, we attempted to propagate this line through in vitro fertilization but were unsuccessful. The female founder for MCKgsg line 4 reproduced once. The progeny from MCKgsg line 4 died prematurely and did not reproduce. Thus, neither high copy line was successfully maintained because of the lethal nature of the overexpression of γ-sarcoglycan. We generated an intermediate high copy number line, MCKgsg line 5, with a copy number of ~29. These mice also displayed small size and abnormal gait similar to MCKgsg lines 2 and 4. Propagation of MCKgsg line 5 was similarly limited. Histologic analysis of muscles from high copy transgenic mice (MCKgsg line 4) using hematoxylin/eosin and Masson trichrome staining showed severe dystrophic changes, including wide variation in fiber size, an inflammatory infiltrate, increased connective tissue, and adipocyte replacement of myofibers (Fig. 2, A and B).
Abundant central nuclei were also evident in the dystrophic muscle (Fig. 2A). Muscle from MCKgsg line 3, a low copy number line, appeared normal (Fig. 2, C and D) when compared with wild type control muscle (Fig. 2, E and F). The dystrophic phenotype of muscles from high copy transgenic mice was very similar to that seen in a γ-sarcoglycan-null mutant (7) (Fig. 2, G and H). Overexpression Inhibits Normal Cellular Targeting of γ-Sarcoglycan to the Plasma Membrane-γ-Sarcoglycan is an integral part of the membrane-associated sarcoglycan complex. Because of the similar histologic appearance of high copy MCKgsg transgenic muscle and γ-sarcoglycan-null mutant muscle (7), we examined γ-sarcoglycan localization in gastrocnemius muscle from normal mice and MCKgsg line 2. Despite a high level of γ-sarcoglycan protein expression, γ-sarcoglycan protein failed to target appropriately to the cell membrane (Fig. 3A). The majority of immunoreactive γ-sarcoglycan protein was detected as punctate staining throughout the cytoplasm of the myofibers and was excluded from nuclei (arrowhead). Similar findings were observed in gastrocnemius muscle from MCKgsg line 4 (data not shown). Muscles from low copy MCKgsg transgenic mice were similar to wild type control muscle, with normal γ-sarcoglycan localization at the plasma membrane (Fig. 3, B and C). Although a small amount of cytoplasmic γ-sarcoglycan protein was seen, this was not toxic to muscle given the normal histology (Figs. 2C and 3B). Lack of γ-sarcoglycan expression at the cell membrane and increased intracellular γ-sarcoglycan altered the expression of the remaining sarcoglycans in a manner different from loss-of-function mutations. The genetic loss of γ-sarcoglycan is accompanied by reduced protein expression at the plasma membrane of the residual sarcoglycans despite their normal mRNA levels (4, 7). Because high-level γ-sarcoglycan expression similarly results in loss of γ-sarcoglycan at the cell membrane, we expected that reduced levels of the other sarcoglycan proteins might be present. Instead, the increased cytoplasmic expression of γ-sarcoglycan in MCKgsg line 2 (high copy) resulted in up-regulation of α- and β-sarcoglycans (Fig. 4, A and C) when compared with wild type (Fig. 4, B and D). Of note, some β-sarcoglycan was found in a punctate pattern in the cytoplasm (Fig. 4C), suggesting that it may be aggregated with γ-sarcoglycan. Normal levels of δ-sarcoglycan and dystrophin were present in MCKgsg line 2 mice (Fig. 4, E and G). These data suggest that increased cytoplasmic γ-sarcoglycan may be toxic to myofibers and sufficient to cause muscular dystrophy. Mislocalization of the remaining sarcoglycans is not required for the development of muscular dystrophy, but abnormal assembly of the sarcoglycans may contribute to the development of muscle degeneration. Immunoblot analysis confirmed the up-regulation of α- and β-sarcoglycan. α- and β-Sarcoglycans were significantly up-regulated in muscle from the MCKgsg high copy lines (lines 2 and 4) compared with MCKgsg line 1, a low copy number line, and wild type littermate muscle (Fig. 4I). δ-Sarcoglycan and dystrophin levels were not significantly changed by overexpression of γ-sarcoglycan at any level. Surprisingly, myosin expression appeared reduced in the high copy MCKgsg transgenic muscles, suggesting that cytoplasmic γ-sarcoglycan expression may disrupt sarcomeric function.
At the light microscopic level, sarcomeres did not appear disrupted in MCKgsg line 2 (data not shown). To determine whether the up-regulation of α- and β-sarcoglycans is accompanied by an increase in α- or β-sarcoglycan mRNA, we compared the mRNA expression in muscle from MCKgsg high copy transgenic and wild type mice. α-Sarcoglycan mRNA in MCKgsg line 4 was similar to that in wild type (Fig. 4J, top panel), suggesting that the up-regulation of α-sarcoglycan is related to an inability of myocytes to regulate stoichiometric ratios at the sarcolemma. β-Sarcoglycan mRNA exists as three isoforms (4.4, 3.0, and 1.4 kb) that are thought to encode the same protein. We found that the 3.0- and 1.4-kb fragments were increased, whereas the 4.4-kb fragment was decreased (Fig. 4J, middle panel). It is possible that these alternatively spliced forms may regulate the overall mRNA and, consequently, β-sarcoglycan protein expression. Thus, multiple factors might be involved in the mechanism underlying the up-regulation of β-sarcoglycan protein. Fast Fibers Are Preferentially Affected-Muscle fibers respond to damage differently depending on their fiber type. It was shown that fast muscle fibers are preferentially affected in DMD (21). We examined how fiber types respond differently in muscular dystrophy mediated by γ-sarcoglycan overexpression under the control of the MCK promoter. We examined limb muscles from MCKgsg high copy mice and compared soleus (predominantly slow) to gastrocnemius (predominantly fast) muscle for ATPase activity. Fig. 5 shows both soleus (lower right portion of each muscle section) and gastrocnemius (upper left portion). Compared with normal muscles (Fig. 5A), muscles from MCKgsg line 2 (Fig. 5B) showed pervasive degeneration in areas, such as the gastrocnemius, where fast fibers are concentrated. Only the fast fibers in the soleus of MCKgsg line 2 showed overt degeneration. In MCKgsg line 2, slow fibers, seen abundantly in the soleus, showed less damage. The soleus also showed an increased percentage of slow fibers, consistent with their decreased susceptibility to muscular dystrophy and the decreased level of MCK promoter-directed γ-sarcoglycan overexpression. It was previously shown that the MCK promoter directed dystrophin protein expression at a level 3-4 times higher in fast fibers than in slow fibers (22). It is possible that reduced expression of MCKgsg may occur in slow fibers, accounting for some of the protection of slow skeletal fibers. DISCUSSION Disruption of the DGC is responsible for a number of genetically distinct forms of muscular dystrophy. This includes mutations in the dystrophin gene that cause DMD and mutations in the sarcoglycan genes that cause the limb girdle muscular dystrophies (2,23). Many of these mutations are thought to produce disruption of the DGC by loss of function or loss of protein expression. Because mice null for sarcoglycan genes have been generated (6, 7) and these mice fully recapitulate the membrane instability and other features of human muscular dystrophy, we generated mice expressing γ-sarcoglycan to study replacement approaches for the sarcoglycan gene products. For these transgenic studies, we used the MCK promoter to drive high-level, striated muscle-specific expression. We selected the MCK promoter because it has been used extensively in transgenic gene replacement studies of dystrophin (11). We found that a profound, lethal muscle-wasting disorder developed in transgenic mice with high-level overexpression of murine γ-sarcoglycan.
These mice typically died within 2-3 months of age, were markedly reduced in size, and displayed limited mobility. In the high-level γ-sarcoglycan-overexpressing mice, we noted that fast fibers were preferentially affected. This may be consistent with the fiber type specificity of the MCK promoter (22) or may be related to observations made in DMD muscle, where fast fibers are more sensitive to damage (21). Lower levels of γ-sarcoglycan expression from the same MCK promoter were not associated with muscle damage. The lower levels of expression seen in the hearts of these transgenic animals resulted in near-normal γ-sarcoglycan expression and no evidence of toxicity or damage (data not shown), suggesting that the lethality associated with γ-sarcoglycan overexpression is related to skeletal muscle toxicity. Because cardiomyopathy typically develops later in mice with sarcoglycan gene mutations (6,7), the premature death in MCKgsg high copy transgenic mice may have limited our ability to detect cardiomyopathy. The high level of γ-sarcoglycan expression resulting from the MCKgsg transgenes produced several molecular consequences. Overexpression of γ-sarcoglycan resulted in reduced γ-sarcoglycan at the plasma membrane of myofibers. Loss of γ-sarcoglycan expression at the membrane is found in DMD and sarcoglycan-mediated muscular dystrophies. Indeed, a human muscular dystrophy patient carrying the common Δ521-T founder mutation was described who maintained expression of α-, β-, and δ-sarcoglycan and preferentially displayed reduced γ-sarcoglycan (24). Thus, the loss of γ-sarcoglycan at the plasma membrane of myofibers is likely a contributor to the pathogenesis seen in this overexpression model. A second consequence of γ-sarcoglycan overexpression was the accumulation of γ-sarcoglycan-immunoreactive aggregates in the cytoplasm of myofibers. Because the sarcoglycan complex is associated with both mechanical and signaling functions (25), mislocalization of γ-sarcoglycan in the cytoplasm could lead to abnormal signaling or alteration of cytoskeletal elements that interact with γ-sarcoglycan. In its cytoplasmic domain, γ-sarcoglycan interacts with a muscle-specific form of filamin (γ-filamin or filamin 2), and loss of γ-sarcoglycan produces an increased plasma membrane γ-filamin level (26). Once antibodies that recognize murine γ-filamin are available, it will be interesting to determine whether filamin localization is altered in γ-sarcoglycan-overexpressing mice. A third consequence of marked γ-sarcoglycan expression is the increased expression of α- and β-sarcoglycan. Up-regulation of α-sarcoglycan is possibly due to an inability to regulate protein assembly at the sarcolemma. However, regulation at both the transcriptional and protein levels may be responsible for the up-regulation of β-sarcoglycan. The increase in these proteins likely produces abnormal sarcoglycan complexes that, in turn, may produce muscular dystrophy. The aberrant ratio of sarcoglycans may cause abnormal mechanical and signaling functions that may be pathologic to the muscle. Despite the similar histology between γ-sarcoglycan overexpression-mediated muscular dystrophy and muscular dystrophy caused by dystrophin- or γ-sarcoglycan-null mutations, disparities exist. In muscle degeneration where sarcoglycan is absent, instability of the muscle membrane is demonstrated by increased permeability. Normal muscle is impermeable to Evans blue dye, a small molecular weight vital tracer.
Mutations in dystrophin or sarcoglycan lead to abnormal membrane permeability, and Evans blue dye uptake is a significant feature of the membrane defect in mdx and γ-sarcoglycan mutants (7,27,28). We were able to assess Evans blue dye uptake in MCKgsg line 5 (copy number 29) and found no evidence for Evans blue dye uptake (data not shown). The expression and membrane localization of the remaining sarcoglycan subunits in γ-sarcoglycan-overexpressing muscle likely have a protective effect on membrane permeability that is absent in γ-sarcoglycan-null muscle (7). Thus, although overexpression of γ-sarcoglycan leads to muscular dystrophy, the mechanism by which this muscular dystrophy develops likely differs from that of γ-sarcoglycan deficiency. Transgenic rescue of dystrophin deficiency in the mdx mouse has been used to demonstrate that dystrophin replacement is effective (11,22). This approach has been highly illustrative in delineating structure-function relationships within dystrophin and its interaction with the remainder of the DGC (11). Furthermore, overexpression of utrophin, the highly related dystrophin homolog, can also fully rescue the dystrophic phenotype in the mdx mouse (29). FIG. 4. Expression of DGC proteins in γ-sarcoglycan-overexpressing mice. Quadriceps muscle from MCKgsg line 4 (A, C, E, and G) and normal controls (B, D, F, and H) were stained for components of the DGC. Shown in A and B is staining for α-sarcoglycan. C and D represent β-sarcoglycan staining. δ-Sarcoglycan (E and F) and dystrophin (G and H) are also shown. I, Western blot analysis of α-, β-, and δ-sarcoglycans (α-sg, β-sg, and δ-sg), dystrophin (dyst), and myosin in MCKgsg high copy lines (lines 2 and 4), a low copy line (line 1), and a wild type control (wt). Coomassie blue (CB) staining is shown as a loading control. J, Northern blot analysis of α- and β-sarcoglycan in an MCKgsg high copy line (line 4) and a wild type control. Ribosomal RNA is shown as a loading control. In these studies, the same MCK promoter was used but did not result in myofiber toxicity as seen here for γ-sarcoglycan overexpression. This suggests that the skeletal myocyte has different mechanisms for the maintenance of stoichiometric ratios of the different components of the DGC. These studies highlight the complexity of simple gene replacement strategies and underscore the importance of animal models for the treatment of human disease. Because viral replacement strategies typically lead to high and often variable levels of expression throughout the transduced tissue, it is likely that some toxicity occurs in those myofibers that have been transduced at high levels. Moreover, previous viral replacement studies, although critically important in demonstrating that replacement can be effective, have not fully explored the dose response of treatment, including studies of the minimally effective dose and the potential for toxicity related to overexpression. Because viral replacement gene therapy trials are being initiated in humans (1), thorough testing in animal models is required to document the full range of benefits and the potential risks and toxicity.
5,600
2001-06-15T00:00:00.000
[ "Biology", "Medicine" ]
GRASS. II. Simulations of Potential Granulation Noise Mitigation Methods We present an updated version of the GRanulation And Spectrum Simulator (GRASS), which now uses an expanded library of 22 solar lines to empirically model time-resolved spectral variations arising from solar granulation. We show that our synthesis model accurately reproduces disk-integrated solar line profiles and bisectors, and we quantify the intrinsic granulation-driven radial-velocity (RV) variability for each of the 22 lines studied. We show that summary statistics of bisector shape (e.g., bisector inverse slope) are strongly correlated with the measured anomalous, variability-driven RV at high pixel signal-to-noise ratio (SNR) and spectral resolution. Further, the strength of the correlations varies both line by line and with the summary statistic used. These correlations disappear for individual lines at the typical spectral resolutions and SNRs achieved by current extremely precise radial velocity spectrographs; so we use simulations from GRASS to demonstrate that they can, in principle, be recovered by selectively binning lines that are similarly affected by granulation. In the best-case scenario (high SNR and a large number of binned lines), we find that a ≲30% reduction in the granulation-induced root mean square RV can be achieved, but that the achievable reduction in variability is most strongly limited by the spectral resolution of the observing instrument. Based on our simulations, we predict that existing ultra-high-resolution spectrographs, namely ESPRESSO and PEPSI, should be able to resolve convective variability in other, bright stars. INTRODUCTION Intrinsic stellar variability is one of the chief obstacles limiting the detection of small, rocky planets in the habitable zones of Sun-like stars (i.e., "Earth twins") with the radial-velocity (RV) method (Crass et al. 2021). Various strategies have been devised for coping with different sources and manifestations of this intrinsic variability. For stellar pulsations (p-modes), Chaplin et al. (2019) devised theoretically motivated observing strategies designed to mitigate residual p-mode amplitudes to the sub-10 cm s−1 level; these strategies have since been widely implemented in RV surveys (e.g., Blackman et al. 2020; Gupta et al. 2021) and explicitly validated by Gupta et al. (2022). For magnetic activity on rotation timescales, various works have employed Gaussian Processes (GPs; e.g., Haywood et al. 2014; Rajpaul et al. 2015, 2016; Gilbertson et al. 2020; and references therein), statistically motivated methods (e.g., Collier Cameron et al. 2019; Zhao & Tinney 2020), and neural networks (e.g., de Beurs et al. 2022; Liang et al. 2024) to disentangle the velocity contributions of spots and faculae from those of planets with the aid of various activity indicators. Additionally, models such as SOAP-GPU (Zhao & Dumusque 2023) have been developed to numerically forward model activity-driven variability. However, aside from integrating over several hours, astronomers lack an effective strategy for mitigating the noise introduced by granules that has been definitively demonstrated and implemented in extant extremely precise radial velocity (EPRV) surveys (Cegla et al. 2019; Crass et al.
2021). Indeed, two recent works have shown that supergranulation, the manifestation of convection on larger spatial and longer temporal scales than granulation (see reviews such as Rieutord & Rincon 2010 and Cegla 2019), is the dominant cause of RV variability during solar minimum (Lakeland et al. 2024) and that (super)granulation will preclude the blind detection of Earth-twin exoplanets even if variability from magnetic activity can be perfectly mitigated (Meunier et al. 2023). As RV spectrographs have begun to achieve instrumental precisions and stabilities at and below the ∼1 m s−1 level, granulation has become a greater cause for concern in the EPRV community. Recognizing that stellar oscillations and granulation would constitute large sources of RV noise problematic for the discovery and characterization of Earth-mass planets with the HARPS spectrograph, Dumusque et al. (2011) evaluated various observation strategies in order to determine which ones optimally averaged out noise from these processes. Using simulated RV time series generated from observationally derived stellar velocity power spectra, Dumusque et al. (2011) determined that averaging multiple observations per night spaced apart by hours mitigates granulation noise more effectively than a single long exposure or multiple consecutive exposures. They conclude, quite optimistically, that "granulation phenomena and oscillation modes will not prevent us from finding Earthlike planets in habitable regions." Following the Dumusque et al. (2011) study, Meunier et al. (2015) took a differing but complementary approach for assessing the impact of (super)granulation on the measurement of precise RVs. Tiling a stellar hemisphere with simulated granules of varying sizes, intensities, and velocities, given relations for these quantities derived from empirical laws and studies of hydrodynamical (HD) simulations (see §2.2 of Meunier et al. 2015 and references therein), they allowed the granules to evolve probabilistically in time. Over a simulated 12-year time span, they note a root mean square (RMS) RV of ∼0.8 m s−1 and a photometric RMS of 67 ppm. Simulating observation strategies for mitigating this variability, Meunier et al. (2015) show that the RMS from granulation cannot be sufficiently reduced for averaging timescales commensurate with the granule lifetime (∼10-15 minutes in the case of the Sun). After 30 minutes of averaging, they report an RMS RV of ∼50 cm s−1, which they state is in conflict with the claim made by Dumusque et al. (2011) that granulation (but not supergranulation) can be largely averaged out over these timescales. Nevertheless, they do report that performing multiple measurements over the course of a night is more successful in reducing the granulation RMS than back-to-back observations, but not to the degree reported by Dumusque et al. (2011). The results presented in both Dumusque et al. (2011) and Meunier et al. (2015) consistently suggest that spreading observations out over a night (or over several nights in the case of supergranulation; see §4 of Meunier et al. 2015) mitigates the impact of granulation more effectively than consecutive exposures. However, the two studies disagree somewhat about the absolute effectiveness of their best-case observation strategies, with Meunier et al. (2015) painting a notably more pessimistic picture. Considering more recent results, including Meunier et al. (2023) and Lakeland et al. (2024), it does appear that Dumusque et al. (2011) underestimated the amplitude and impact of granulation on precise RV measurements.
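The observing-strategy argument summarized above can be illustrated with a toy numerical experiment: when the noise is correlated on a granulation-like timescale, averaging a few exposures spread across a night suppresses the nightly RV scatter more than averaging the same number of back-to-back exposures. The sketch below uses an AR(1) process as a stand-in for granulation noise; the amplitude (1 m/s) and correlation time (15 minutes) are illustrative assumptions, not values taken from the cited studies.

```python
# Toy comparison of consecutive vs. spread-out exposures under correlated noise.
import numpy as np

rng = np.random.default_rng(0)
dt_min = 1.0                      # sampling cadence [min]
tau_min, sigma = 15.0, 1.0        # correlation time [min], RV amplitude [m/s]
night_len = 8 * 60                # an 8-hour night, in 1-minute steps
n_nights = 500                    # number of simulated nights

def correlated_series(n):
    """AR(1) series with roughly exponential autocorrelation of timescale tau_min."""
    phi = np.exp(-dt_min / tau_min)
    eps = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), size=n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + eps[i]
    return x

consecutive, spread = [], []
for _ in range(n_nights):
    rv = correlated_series(night_len)
    consecutive.append(rv[:3].mean())                  # three back-to-back exposures
    spread.append(rv[[0, night_len // 2, -1]].mean())  # three exposures spread over the night

print(f"RMS of nightly RV, 3 consecutive exposures: {np.std(consecutive):.2f} m/s")
print(f"RMS of nightly RV, 3 spread-out exposures:  {np.std(spread):.2f} m/s")
```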
It is important to note, though, that the methods used for simulating and interpreting stellar velocities in these works differ fundamentally from how velocities are measured in reality. Whereas Dumusque et al. (2011) and Meunier et al. (2015) directly simulate disk-integrated stellar velocities, in practice RVs are inferred from the observed Doppler shifting of absorption lines in stellar spectra. These measured RVs will differ from the somewhat fictitious RVs studied by Dumusque et al. (2011) and Meunier et al. (2015) because lines form over a range of heights in stellar atmospheres, tracing different portions of the full 3D atmospheric velocity field with varying sensitivities as a function of height.

To address this complicating reality, a series of papers (Cegla et al. 2013, 2018, 2019) used 3D magnetohydrodynamic (MHD) simulations coupled with detailed radiative transfer modeling to faithfully reproduce stellar magnetoconvection and absorption profiles for the Fe I 6302 Å line (see also Cegla 2019 for a detailed review). Through this series of papers, the authors show that perturbations in the shapes of their model disk-integrated line profiles encode information about spurious Doppler shifts from granulation. More specifically, they find strong correlations between the apparent RV of the line studied and various summary statistics designed to describe the bisector curve, including, e.g., the bisector inverse slope (BIS, Queloz et al. 2001).

In principle, such correlations are promising: precise measurements of individual bisector asymmetries predict the anomalous (i.e., granulation-induced) velocity shift, and could consequently be used to mitigate velocities from granulation. However, the authors of Cegla et al. (2019) consider only one absorption line in their work and consequently caution that the optimal bisector summary statistic (or the optimal regions of the bisector for measuring such summary statistics) will likely vary among lines. They further warn that the typical spectral resolutions and line spread function (LSF) sampling (i.e., pixelization) of extant spectrographs, together with the effects of photon noise (see Povich et al. 2001), will preclude measuring sufficiently precise bisectors for such a mitigation technique to be viable.

In order to address the limitations of and complement previous studies of granulation in the context of EPRVs, Palumbo et al. (2022) presented the GRanulation And Spectrum Simulator (GRASS), an open-source computational tool which uses time- and disk-resolved solar spectra first observed for Löhner-Böttcher et al. (2018, 2019) and Stief et al. (2019) to empirically model changes in line shapes created by granulation. Validating this tool, Palumbo et al. (2022) showed that GRASS reproduces the time-averaged line profile and bisector of the Fe I 5434 Å line observed in the disk-integrated Sun by Reiners et al. (2016b), as well as the degree of variability expected from granulation.
In this work, we present a large update to the GRASS software (v2.0), and use it to explore potential avenues for mitigating granulation noise in EPRV measurements. In §2, we describe the solar observations which are used to synthesize spectra (§2.1) and provide an overview of the procedure used by GRASS to construct disk-integrated line profiles from the input solar observations (§2.2). In §3, we validate our model for line synthesis by comparing our synthetic disk-integrated line profiles to the time-averaged profiles observed in the IAG Solar Atlas (Reiners et al. 2016b). In §4, we characterize the line-by-line variability in each of the solar lines studied in this work, and simulate avenues for recovering the line-shape vs. anomalous RV correlation with limitations imposed by finite spectral resolution and SNR. In §5, we interpret and discuss our results in the context of previous studies concerned with the impact of granulation on the measurement of precise RVs, comment on future avenues for demonstrating the mitigation techniques considered in this work, and highlight various limitations of GRASS and the scope of this study. Finally, in §6, we briefly summarize our findings and emphasize that granulation, strictly speaking, encodes coherent signals, rather than simple noise, in the shape of stellar lines.

GRASS uses disk- and time-resolved observations of solar lines to synthesize time-series model stellar (i.e., disk-integrated) spectra with variability from granulation. GRASS and supporting documentation are publicly available on GitHub. Tagged version releases corresponding to Palumbo et al. (2022) and this work are archived on both GitHub and Zenodo (Palumbo et al. 2023a). In this section, we briefly describe the solar observations used as input to GRASS (§2.1), provide an overview of the procedure used by GRASS to synthesize disk-integrated spectra (§2.2), and detail our process for measuring velocities from lines (§2.3). Additional, software-specific implementation details for GRASS v2.0 (including the input data pre-processing, stellar grid tiling procedure, and the GPU implementation) are discussed in detail in Appendix A.

Summary of Simulation Input Data

In Palumbo et al. (2022), we used disk- and time-resolved observations of the Fe I 5434.5 Å line from Löhner-Böttcher et al. (2019) to produce synthetic time-resolved, disk-integrated line profiles. In this work, we use additional spectra originally presented in Löhner-Böttcher et al. (2018), Stief et al. (2019), and Löhner-Böttcher et al. (2019) to synthesize other disk-integrated lines. The observed spectral regions include 22 solar lines which are sufficiently unblended and deep to use as models for spectral variability from granulation. Example disk-center spectra from Löhner-Böttcher et al. (2018), Stief et al. (2019), and Löhner-Böttcher et al. (2019) are shown in Figure 1 with annotations for the lines used in this work. A tabulation of the atomic parameters for these lines is provided in Table 1. Below we provide a brief summary of the processing procedure for these data; a more comprehensive description is provided in Appendix A.1.

LARS, the instrument that provided input spectra for this work, is described in detail in Löhner-Böttcher et al. (2017). The observing scheme and reduction procedure used to obtain these spectra is summarized in Palumbo et al. (2022) and explained in greater detail in Löhner-Böttcher et al. (2018), Stief et al. (2019), and Löhner-Böttcher et al. (2019). In brief, spectra were observed at 41 locations along the North-South and East-West axes of the Sun; only quiet-Sun regions were observed.
[Figure 1 caption, fragment: some lines in the observed regions are excluded from the analyses presented in this work, as explained in §2; other lines are of telluric origin, significantly blended, and/or very shallow. Note the breaks in the wavelength axis; eight spectral regions spanning only a few Å each were observed. Spectroscopic properties and parameters for these lines are tabulated in Table 1.]

[Table 1 note, fragment: all other quantities are from the NIST Atomic Spectra Database (Kramida et al. 2020); rows marked with a double dagger (‡) are excluded from the analyses presented in this work, as explained in §2. The number of significant figures reported matches those given in the NIST Atomic Spectra Database or Löhner-Böttcher et al. (2019) (for daggered quantities).]

The spectra have R ∼ 700,000, and are calibrated on an absolute wavelength scale with accuracy ∼0.02 mÅ (or about 1 m s−1). On short timescales (e.g., over the duration of the ≳20-minute observation baselines), Löhner-Böttcher et al. (2017) report that the instrument instability is the largest source of uncertainty in the positions of lines; generally this error is at the level of a few cm s−1, but can be larger in some cases. In this work, we quantify the impact of this uncertainty by repeatedly synthesizing many realizations of our synthetic spectra, from which we measure sample means and standard deviations for the RV variability of lines. This process is described further in §4.1.

As in Palumbo et al. (2022), we measure line bisectors and widths as a function of depth into the line for all observed lines for each 15-second bin, which together we refer to as the "input data," following the nomenclature established in Palumbo et al. (2022). These input data losslessly encode the temporal variability in the shape of each line and are used as the basis of our stellar surface simulation and integration described in the following section, §2.2. Our process for measuring these input data from the solar spectra, which has changed slightly from Palumbo et al. (2022), is described in detail in §A.1. This pre-processing of the solar observations was performed once, and the resulting data products are downloaded from Zenodo (Palumbo et al. 2023b) by GRASS upon installation.

Overview of GRASS Synthesis Procedure

Owing to the expanded library of solar template lines available to GRASS, we have slightly updated and optimized the procedure it uses to create synthetic, disk-integrated spectra from these input data. Here, we describe the synthesis procedure used by GRASS, highlighting changes and updates to the synthesis procedure detailed in §3.2.1 of Palumbo et al. (2022).

Prior to the spectral synthesis step, GRASS first tiles a model stellar grid and computes the requisite weights and rotational velocities (see §A.2). These quantities are computed once and stored in memory, since they are assumed to be unchanging in time. Following the initial tiling and geometric computations, GRASS uses the input data to reconstruct disk-resolved line profiles in each tile on the model stellar surface for each time step of the simulation. As in Palumbo et al. (2022), GRASS selects the input data with the closest µ value and matching directional axis for each stellar surface tile. In the event that a certain axis and µ position lack data for a given template line, the next-nearest input data are used.

The temporal variability in each disk-resolved line profile is directly encoded from the time-series input observations. However, as described in §3.2.3 of Palumbo et al. (2022), we do apply a random phase offset to the input data in each stellar surface tile.
This random phase offset ensures that the disk-resolved line profiles in adjacent tiles do not unrealistically move in concert. Because some template lines were observed contemporaneously (e.g., Fe I 5432 Å and Fe I 5434 Å are in the same observed spectral region, and so were always observed simultaneously; see Figure 1), the input data for these lines are, by default, kept in phase within each stellar surface tile. The applied phase offsets also lead to the suppression of p-modes in the final disk-integrated spectrum, since the oscillations in the input data are added incoherently (see §3.2.4 of Palumbo et al. 2022).

Similarly to GRASS v1.0 (Palumbo et al. 2022), the disk-integrated flux as a function of time and wavelength is then calculated as the (weighted) sum over the individual tile intensities, where (as noted in Appendix A.2) tile weights are given by the product of the limb-darkened continuum intensity and the projected tile area (analogous to Equation 18.3 of Gray 2008). As in Palumbo et al. (2022), these summations are performed in place to avoid excess memory allocation (i.e., the disk-resolved spectra computed for each tile are not retained), which would become quite unwieldy for larger spectra.

Measurement of Velocities from Spectra

As in Palumbo et al. (2022), we compute CCFs in order to measure velocities from individual lines and spectra. In implementation, EchelleCCFs (Ford et al. 2021) was used to cross correlate the considered line profile(s) with a Gaussian template mask centered at the known rest-frame wavelength of said line and with width (in units of velocity) given by the speed of light divided by the spectral resolution. The mask was projected onto the line profile in steps of 100 m s−1. The extremum of the resulting cross-correlation function (CCF) was fit with a Gaussian function via non-linear least squares, and the RV was taken as the mean of the best-fit Gaussian.

Our velocity-measurement procedure was tested by injecting known Doppler shifts into synthetic spectra; we find that the recovered velocities are accurate, with errors at the ≲1 cm s−1 level (compare to the expected amplitude of granulation noise at a few to several tens of cm s−1). We also tested other combinations of mask shapes (tophat vs. Gaussian) and functional forms for fitting the CCF peak (quadratic vs. Gaussian) and found that the chosen combination of a Gaussian mask with a Gaussian fit performed best.
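As a concrete illustration of the velocity-measurement procedure just described, the sketch below cross-correlates a single continuum-normalized line with a Gaussian mask and fits a Gaussian to the CCF peak. It is written in Python with NumPy/SciPy rather than the Julia EchelleCCFs package actually used by GRASS, the mask width is taken as the Gaussian σ for simplicity, and all argument names are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299_792.458  # speed of light in km/s


def measure_rv(wave, flux, lambda0, resolution, v_step=0.1):
    """Measure an RV from one continuum-normalized line via a Gaussian-mask CCF.

    wave, flux : wavelength and flux arrays for a single absorption line
    lambda0    : rest-frame wavelength of the line (same units as wave)
    resolution : resolving power R; the mask velocity width is c / R
    v_step     : CCF velocity step in km/s (0.1 km/s = 100 m/s, as in the text)
    """
    v_grid = np.arange(-15.0, 15.0 + v_step, v_step)   # trial velocities in km/s
    sigma_mask = lambda0 / resolution                   # c/R converted to wavelength units
    depth = 1.0 - flux                                  # correlate against absorption depth

    ccf = np.empty_like(v_grid)
    for i, v in enumerate(v_grid):
        center = lambda0 * (1.0 + v / C_KMS)            # Doppler-shifted mask center
        mask = np.exp(-0.5 * ((wave - center) / sigma_mask) ** 2)
        ccf[i] = np.trapz(depth * mask, wave)

    # fit a Gaussian to the CCF extremum; the RV is the mean of the best-fit Gaussian
    def gauss(v, amp, mu, sig, off):
        return amp * np.exp(-0.5 * ((v - mu) / sig) ** 2) + off

    p0 = [ccf.max() - ccf.min(), v_grid[np.argmax(ccf)], 2.0, ccf.min()]
    popt, _ = curve_fit(gauss, v_grid, ccf, p0=p0)
    return popt[1]  # km/s
```

A top-hat mask or a quadratic fit to the peak could be substituted in the same framework, which is how the alternative combinations mentioned above can be compared.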
GRASS MODEL VALIDATION

In Palumbo et al. (2022), the line synthesis procedure used by GRASS was validated by comparison of time-averaged line profiles to the IAG Solar Atlas (Reiners et al. 2016b). In the following subsections, we re-perform this validation for each template line used in this work (§3.1), and also discuss how effectively these template lines are able to model the shapes and behaviors of other lines (§3.2).

Disk-Integrated Line Profiles and Bisectors

In Palumbo et al. (2022), we verified that GRASS could accurately reproduce disk-integrated line profiles and bisectors using disk-resolved spectra as input. Here, we validate the synthesis for the other lines presented in this work. Similarly to Palumbo et al. (2022), we compare time-averaged synthetic line profiles from GRASS to line profiles observed in the IAG Solar Atlas (Reiners et al. 2016b) in order to semi-quantitatively assess our synthesis accuracy.

To perform this comparison, we degrade the spectral resolution of the IAG Solar Atlas to the nominal LARS resolution of R ∼ 700,000 via convolution with a Gaussian LSF given by

$$ \mathrm{LSF}(\lambda - \lambda_0) = \frac{1}{\sigma(\lambda)\sqrt{2\pi}} \exp\left[-\frac{(\lambda - \lambda_0)^2}{2\,\sigma(\lambda)^2}\right], \qquad (1) $$

where σ(λ) is given by

$$ \sigma(\lambda) = \frac{\lambda}{2\sqrt{2\ln 2}\, R}, \qquad (2) $$

and λ is the wavelength sampled by a given pixel. To compute residuals, we then perform a flux-conserving interpolation onto a common grid of wavelengths with ∼4 pixels per LSF FWHM (following the algorithm of Carnall 2017). Rather than computing bisectors directly from the line profiles, we instead measure bisectors from individual-line CCFs. These CCF bisectors are smoother (and consequently easier to compare) than bisectors measured directly from line profiles. The resulting line profiles and bisectors are discussed individually in Appendix B. An example set of comparisons is shown in Figure 10; the full set of comparison figures is available from the online journal. The full ensemble of reproduced bisectors is shown in Figure 2 and discussed further in §3.2.

In general, we find that we are able to synthesize line profiles that are faithful to those seen in the IAG Atlas, with errors within about 1% in flux. Deviations are usually caused by blends in the wings of the lines of interest, which are modeled out in the pre-processing stage for the LARS data (see §A.1). Likewise, bisectors are most accurate where the first derivative of the line profile is largest, but tend to deviate in the top 20% or so of the line, owing to blends and/or our imperfect modeling of the line wings in the pre-processing stage, as well as at the very bottom of the line, where interpolation noise and measurement error become large. Below ∼80% of the continuum, the velocity errors in the line bisectors are generally within a couple of m s−1, and within 1 m s−1 in the best cases. For lines that have blends in their wings, the velocity errors are larger toward the continuum (above 10 m s−1 in some cases). As noted in Palumbo et al. (2022), deviations in the cores of lines (particularly the deeper lines) could arise from chromospheric activity captured during the observation of the IAG Atlas during a period of heightened solar activity in 2014 (Hathaway 2015; Reiners et al. 2016b). Specific comments for each line are given in Appendix B, as well as Section 3 and Appendix A of Löhner-Böttcher et al. (2019).
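To make the resolution-matching step of this comparison concrete, the following is a minimal Python sketch that convolves a high-resolution spectrum with a Gaussian LSF of the form in Equations 1 and 2 and resamples it at ∼4 pixels per LSF FWHM. It assumes a roughly uniform input wavelength grid and substitutes simple linear interpolation for the flux-conserving scheme of Carnall (2017); all function and variable names are placeholders rather than the GRASS implementation.

```python
import numpy as np


def degrade_resolution(wave, flux, r_target, pixels_per_fwhm=4):
    """Convolve a spectrum with a Gaussian LSF of resolving power r_target
    and resample onto a grid with ~pixels_per_fwhm pixels per LSF FWHM."""
    lam0 = np.mean(wave)                                   # representative wavelength
    fwhm = lam0 / r_target                                 # LSF FWHM (Eq. 2 convention)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

    # build a Gaussian kernel on the (assumed uniform) input wavelength grid
    dlam = np.median(np.diff(wave))
    half_width = int(np.ceil(5 * sigma / dlam))
    x = np.arange(-half_width, half_width + 1) * dlam
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                                 # normalize the kernel

    flux_lo = np.convolve(flux, kernel, mode="same")

    # resample onto a coarser grid; a flux-conserving interpolation (Carnall 2017)
    # would be used in practice, simple linear interpolation suffices for a sketch
    new_wave = np.arange(wave[0], wave[-1], fwhm / pixels_per_fwhm)
    new_flux = np.interp(new_wave, wave, flux_lo)
    return new_wave, new_flux
```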
Use of Template Lines as Generalized Line Models

Only a limited number of lines were observed by Löhner-Böttcher et al. (2018, 2019). Since thousands of absorption lines are generally used to measure velocities from spectra, v1.0 of GRASS (Palumbo et al. 2022) modeled lines of differing depths by truncating the deep Fe I 5434 Å line bisector (see §3.2.2 of Palumbo et al. 2022). This approach was motivated by the heuristic presented in Gray (2008), wherein it is noted that the bisectors of shallower lines tend to reflect the shape of the upper portions of the bisectors of deeper lines (see their Figure 17.14). For comparison, we have overplotted the synthetic disk-integrated bisectors produced in this work in Figure 2.

It is readily apparent that the four blend bisectors at right in Figure 2 cannot be superimposed as readily as the classical "C"-shaped bisectors at left, with the potential exception of Ni I 5435 Å and Ca I 6169.5 Å. Further, even in the left-hand plot of Figure 2, there is notable variance between the bisectors, especially in the case of the three deepest lines: Fe I 5576 Å, Fe I 5250.6 Å, and Fe I 5434 Å. This imperfect correspondence is not entirely surprising, given that the limited number of lines considered were not cherry-picked for the similarities in their bisectors (as is the case in Figure 17.14 of Gray 2008). Rather, the lines and spectral regions observed for Löhner-Böttcher et al. (2019) and then used in this work were chosen based on their past usage in the heliophysics literature.

Although the picture painted by Figure 17.14 of Gray (2008) is certainly convenient, it is clear from Figure 2 that the correspondence of bisector values at given intensities/depths is not universal. This reality is not entirely surprising. Lines do not form at a single representative "formation height," but rather over a run of heights that differs in accordance with the properties of the stellar atmosphere and the atomic/ionic properties of the line-forming species. Consequently, a given continuum-normalized intensity cannot be neatly mapped to a singular physical height in the solar atmosphere across differing lines. For the results presented in this study, we only use the 22 template lines to reconstruct their own disk-integrated profiles. Future works could explore which of the template lines used herein are best-suited to model variability in other lines not observed by LARS, but such a study is beyond the scope of the analysis presented in this work (see also the discussion in §5.1 and §5.2).

SIMULATIONS OF GRANULATION MITIGATION

As is visually apparent from Figure 2, granulation affects lines differentially. Consequently, measured RV variability will differ line by line, and the amount of granulation noise measured in RV observations will differ depending on the set of lines used. In the following subsections, we examine and quantify this line-by-line variability for the set of solar lines modeled in this study (§4.1), explore correlations between line-shape summary statistics and anomalous RVs (§4.2), and simulate how these correlations might be used for mitigation given the constraints of existing instrumentation (§4.3).

Line-by-line Variability

It has been widely shown that the measured extent of variability manifests differentially between lines and sets of lines. In the case of individual lines, Elsworth et al. (1994) and Pallé et al. (1999) report different RMS RVs for the K I 7699 Å line and the Na I D1 and D2 lines, respectively. Considering sets of lines, Al Moulla et al. (2022, 2024) showed that the measured RMS RV varies among sets of solar lines binned by formation temperature. As suggested by these studies, simply measuring RVs from a minimally variable set of lines may constitute one potentially viable, albeit limited, method for granulation mitigation. To quantify the different levels of variability in lines, we synthesized many disk-integrated time series for each of the 22 lines considered in this work, from which we measured the RMS of the individual line RVs.
The procedure for determining these individual line RMS values is as follows: GRASS was used to generate 40-minute disk-integrated time series with a time resolution of 15 seconds for each considered line profile (see Appendix A.1 for details on the temporal cadence and baseline of the input data); velocities were measured from these line profiles as described in §2.3 (see also §3.3 of Palumbo et al. 2022); lastly, an RMS was measured for the resulting RV time series. This process was repeated many times with different realizations of the synthetic line profiles to yield a robust mean and standard deviation for each RMS RV.

[Figure 3 caption, fragment: although there is an appreciable difference between the most and least variable lines, it is not immediately apparent how the variability of a line could be predicted from the formation or atomic properties of a line.]

The resulting RMS values are shown in descending order in the left-hand panel of Figure 3 and are reported in Table 2. There is a notable difference in the RMS between the most and least variable lines. At the high end, the Fe I 5379 Å line exhibits a ∼75 cm s−1 RMS. At the low end, Mn I 5432 Å has a ∼45 cm s−1 RMS. Given this nearly ∼30 cm s−1 difference between the most and least variable lines, a fruitful granulation mitigation strategy might consist of identifying the least variable lines and measuring RVs from only these lines, rather than from a larger line list that also includes highly variable lines. To enable such an approach, one would need to know the intrinsic variability of each line for a given star. If such a quantity cannot be measured directly from spectra (owing to, e.g., limitations from SNR, sampling, etc.), then predicting variability from a proxy variable may be the next-best approach. However, we note no significant trend connecting the RV variability of a line to its depth, wavelength, or the potential of the lower/upper state of its transition. Recent work (2024) has suggested that such a trend may exist; however, given the small number of lines considered, the significance of any such trend in our data is questionable at best. We discuss these results and prospects for studying line-by-line variability further in §5.1.
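Schematically, the repeated-realization estimate of each line's RMS RV described above reduces to the short sketch below; `simulate_line_timeseries` and `measure_rv` are hypothetical stand-ins for the GRASS synthesis call and the CCF velocity measurement of §2.3, and the number of realizations is a placeholder.

```python
import numpy as np


def rms_rv_for_line(simulate_line_timeseries, measure_rv, n_realizations=100):
    """Estimate one line's granulation-driven RMS RV and its uncertainty.

    Each realization synthesizes a 40-minute time series (15 s cadence),
    measures an RV per snapshot, and records the RMS of that RV series.
    """
    rms_values = []
    for _ in range(n_realizations):
        snapshots = simulate_line_timeseries()            # list of (wave, flux) pairs
        rvs = np.array([measure_rv(w, f) for w, f in snapshots])
        rms_values.append(np.std(rvs))                    # RMS about the mean velocity
    rms_values = np.array(rms_values)
    return rms_values.mean(), rms_values.std()            # robust mean and 1-sigma spread
```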
Mitigation via Bisector Diagnostic Correlations

Using disk-integrated Fe I 6302 Å profiles derived from MHD simulations, Cegla et al. (2019) showed notable correlations between various line-shape summary statistics and the anomalous (i.e., granulation-induced) RV measured. An example of such a correlation between the bisector inverse slope (BIS) and velocity for synthetic Fe I 5434 Å profiles produced by GRASS is shown in Figure 4. Note that the amplitude of the bisector variations (shown in the left-hand panel of Figure 4) was amplified by 50× in order to make these variations (which are at the sub-m s−1 scale) visible on the scale of the mean bisector (which spans multiple hundreds of m s−1 from the red-most to the blue-most point).

Notably, the simulations performed by Cegla et al. (2019) considered only one line (the Fe I 6302 Å line) under an artificially strong vertical magnetic field, which they note inhibits convection (and therefore the observed variability of their line) relative to what is observed in the quiet Sun. To supplement the analysis presented in Cegla et al. (2019), we here perform a similar bisector analysis for the 22 lines studied in this work.

To perform this analysis, we first used GRASS to synthesize time series for each of the 22 template lines considered, and then measured various bisector-shape summary statistics and RVs for each time snapshot of each simulation. RVs were measured from CCFs as in §4.1, and bisector shape diagnostics were measured from the CCF bisectors. As in §3.1, we measured bisectors from CCFs, rather than directly from line profiles, in order to produce smoother curves. In later sections (namely §4.3), we will also measure bisector diagnostics for CCFs computed from many lines; by measuring CCF bisectors for individual lines, we can directly compare the performance of the bisector diagnostics on lines individually and in aggregate. Finally, mitigated RVs were measured by subtracting off the RVs predicted by a linear fit between the raw measured RVs and each bisector diagnostic. This procedure was repeated many times for each line using many different realizations of the synthetic profile time series to measure robust averages for the correlation coefficients and improved RMS RVs. These initial simulations and correlations were constructed for line profiles synthesized at R = 700,000 with SNR = ∞ (i.e., no additional noise, photon or otherwise, was simulated in the synthesized spectra). We acknowledge that these simulation parameters are certainly not realistic; this initial analysis was performed in order to determine the relative usefulness of each bisector shape diagnostic in a hypothetical best-case scenario where granulation is the sole source of RV noise. The effects of various limitations imposed by more realistic observing conditions (including spectral resolution and photon noise) are evaluated in §4.3.

The considered bisector shape diagnostics include: the bisector inverse slope (BIS), the bisector curvature (C), and the bisector span (or amplitude, a_b). Definitions for these diagnostics are given below; see also Figure 6 of Cegla et al. (2019) for an excellent illustrative demonstration of these shape diagnostics. The bisector inverse slope (BIS, Queloz et al. 2001) is given by

$$ \mathrm{BIS} = v_t - v_b, \qquad (3) $$

where v_t is the average velocity of the bisector between 10% and 40% of the line depth, and v_b is the average velocity between 55% and 90%. The bisector curvature (C, Povich et al. 2001) is given by

$$ C = (v_3 - v_2) - (v_2 - v_1), \qquad (4) $$

where the v_3, v_2, and v_1 regions are bounded at 20%-30%, 40%-55%, and 75%-95% of the line depth, respectively. Lastly, the bisector amplitude (a_b, Livingston et al. 1999) is given by

$$ a_b = v_{\mathrm{min}} - v_0, \qquad (5) $$

where v_min is the blue-most velocity in the bisector curve and v_0 is the velocity in the line core (the bottom of the bisector). In practice, we calculate v_0 as the mean velocity in the bottom few points of the bisector, excluding the bottom-most measurement, which is often highly erroneous due to interpolation noise. We present and analyze the results of these simulations below in §4.2.1, where correlation coefficients and corrected RMS RVs were calculated using the default definitions of the bisector diagnostics given above. In §4.2.2 we follow Cegla et al. (2019) and iteratively "tune" the definitions of these diagnostics to maximize the correlation for each line.
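For reference, the three diagnostics in Equations 3-5 can be computed from a measured bisector (velocity as a function of fractional line depth) as in the following sketch, which uses the canonical averaging bounds quoted above. The array conventions and the treatment of the core velocity are illustrative assumptions rather than the exact GRASS implementation.

```python
import numpy as np


def _mean_velocity(depth, velocity, lo, hi):
    """Average bisector velocity between fractional line depths lo and hi
    (depth runs from 0 near the continuum to 1 at the line core)."""
    sel = (depth >= lo) & (depth <= hi)
    return velocity[sel].mean()


def bisector_inverse_slope(depth, velocity):
    """BIS = v_t - v_b (Eq. 3): averages over 10-40% and 55-90% of the line depth."""
    v_t = _mean_velocity(depth, velocity, 0.10, 0.40)
    v_b = _mean_velocity(depth, velocity, 0.55, 0.90)
    return v_t - v_b


def bisector_curvature(depth, velocity):
    """C = (v3 - v2) - (v2 - v1) (Eq. 4): regions at 20-30%, 40-55%, 75-95% depth."""
    v3 = _mean_velocity(depth, velocity, 0.20, 0.30)
    v2 = _mean_velocity(depth, velocity, 0.40, 0.55)
    v1 = _mean_velocity(depth, velocity, 0.75, 0.95)
    return (v3 - v2) - (v2 - v1)


def bisector_amplitude(depth, velocity, n_core=3):
    """a_b = v_min - v_0 (Eq. 5), with v_0 the mean of the bottom few bisector
    points, excluding the single deepest (noisiest) measurement."""
    v_min = velocity.min()                        # blue-most point of the bisector
    order = np.argsort(depth)[::-1]               # indices sorted deepest first
    v_0 = velocity[order[1:1 + n_core]].mean()    # skip the bottom-most point
    return v_min - v_0
```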
Default Bisector Diagnostics

The results of the granulation mitigation simulations described above are tabulated in Table 3. For each line and bisector diagnostic, the mean Pearson correlation coefficient (R), mean RMS RV, 1σ variation in RMS RV, and percent improvement (rounded to the nearest whole number) over the RMS RVs reported in Table 2 are shown. The reported improvements should be taken as approximate and representative; a 1% change in the RMS RV would amount to ≲1 cm s−1 at the ∼30-70 cm s−1 velocity scale of granulation.

The best-performing diagnostics are able to remove, optimistically, ∼25-33% of the intrinsic RV variability measured for each line. However, the level of correction achieved by each diagnostic was inconsistent across the considered lines. E.g., although BIS performed quite well for Fe I 5434 Å, this diagnostic's performance varied widely for other lines. Likewise, the bisector amplitude performed well for a small handful of lines, but failed to achieve any meaningful improvement for several lines. Interestingly, the bisector curvature performed poorly across the board, with the notable exception of Fe I 5383 Å. This variance is not surprising: Cegla et al. (2019) inferred that the BIS performed well when the v_t and v_b regions from which this quantity is calculated bound the bend of the "C" and the bottom of the bisector (corresponding to the line core), respectively, for their synthetic Fe I 6302 Å line. This picture is generally corroborated by our results: BIS generally performs best for lines with classical "C"-shaped bisectors whose bend and core happen to be captured by the canonical v_t and v_b regions of 10%-40% and 55%-90% depth. Indeed, we find that BIS performs well for our Fe I 6302 Å line simulations, removing ∼24% of the line's intrinsic variability.

In the context of Cegla et al. (2019), the observed variance of the bisector amplitude is perhaps harder to explain. Yet, on closer inspection it is clear that the bisector amplitude tends to fail to predict the anomalous RV for lines whose bisectors deviate from the classical "C" shape. Two salient examples are Fe I 6170 Å (which has a "/"-shaped bisector owing to line blends; see the online journal figure set and Appendix B) and Fe I 5436.3 Å (which has a bisector corresponding to only the top-most portion of the classical "C" shape; see the online journal figure set and Appendix B). In the latter case, the bisector amplitude hardly varies from 0, since the blue-most point and the core of the bisector are, in most snapshots in any given synthetic time series, one and the same.

The generally poor performance of the bisector curvature, C, might suggest that increasing the number of averaging regions (three, compared to two for BIS) does not necessarily improve sensitivity to changes in bisector asymmetry. In some part, though, the poor performance of C may also be happenstance: the canonical averaging regions may not be ideally placed to probe changes in bisector curvature for every line. In the following section, we explore this possibility by adjusting the averaging regions used in both BIS and C in order to maximize their diagnostic power for each line.

Optimized Bisector Diagnostics

As noted by Cegla et al. (2019), these bisector diagnostics (except for a_b) are not "tailored" to the properties of a given line: although the v_t region happens to capture the kink in the "C"-shaped curve of the Fe I 5434 Å line (see Figure 4), it may not for another line. To address this fact, Cegla et al. (2019) iteratively varied the averaging regions used in the definitions of BIS and the bisector curvature C in order to optimize their correlations with the measured RVs of their Fe I 6302 Å line profiles. We have performed this tuning for each of the 22 lines studied in this work.
To tune the bisector diagnostics, we first drew values for the bounding intensities (which we refer to as b1, b2, b3, and b4 for BIS, and c1, c2, c3, c4, c5, and c6 for C) from uniform random distributions. As in Cegla et al. (2019), we required that each averaging region span at least 5% in depth, and we additionally restricted the bounding intensities to fall between 15% and 95% of the given line depth (in order to avoid the regions near the continuum and the core of the line, where bisector measurement error is largest). Draws which violated these restrictions were rejected and re-drawn. For each accepted draw, we repeatedly calculated the Pearson R between the tuned BIS/tuned C and the measured RV using many different realizations of each line time series. For a given draw, we retain the median Pearson R, and we then choose the optimized bounding intensities as those with the highest median Pearson R. The optimized values and the corresponding improved RMS RVs are given in Table 4 and plotted as the colored diamonds with blue error bars in Figure 3.
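A schematic version of this Monte Carlo tuning, written for the BIS case only, is sketched below: bounding depths are drawn uniformly, draws violating the 5% span and 15%-95% restrictions are rejected, and the draw with the highest median Pearson R across realizations is kept. The data layout, the pairwise sorting of the drawn bounds, and the function names are assumptions made for illustration, not the GRASS code itself.

```python
import numpy as np
from scipy.stats import pearsonr


def draw_bis_bounds(rng):
    """Draw depths (b1 < b2, b3 < b4) for a tuned BIS; each averaging region
    must span >= 5% in depth and all bounds must lie between 15% and 95%."""
    while True:
        b1, b2 = np.sort(rng.uniform(0.15, 0.95, size=2))
        b3, b4 = np.sort(rng.uniform(0.15, 0.95, size=2))
        if (b2 - b1 >= 0.05) and (b4 - b3 >= 0.05):
            return b1, b2, b3, b4


def tuned_bis(depth, velocity, b1, b2, b3, b4):
    """BIS with user-specified averaging regions (cf. Equation 3)."""
    v_t = velocity[(depth >= b1) & (depth <= b2)].mean()
    v_b = velocity[(depth >= b3) & (depth <= b4)].mean()
    return v_t - v_b


def tune_bis(realizations, n_draws=1000, seed=0):
    """realizations: list of (bisectors, rvs) pairs, one per synthetic time series,
    where bisectors is a list of (depth, velocity) arrays (one per snapshot)
    and rvs are the matching measured RVs."""
    rng = np.random.default_rng(seed)
    best_bounds, best_r = None, -np.inf
    for _ in range(n_draws):
        bounds = draw_bis_bounds(rng)
        r_per_realization = []
        for bisectors, rvs in realizations:
            bis = np.array([tuned_bis(d, v, *bounds) for d, v in bisectors])
            r_per_realization.append(pearsonr(bis, rvs)[0])
        median_r = np.median(r_per_realization)      # retain the median Pearson R
        if median_r > best_r:
            best_r, best_bounds = median_r, bounds
    return best_bounds, best_r
```

Tuning the curvature C proceeds identically, with six bounding depths defining three averaging regions instead of two.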
Generally, tuning BIS is successful: the correlation strengths are improved, and the anomalous RVs for each line are better mitigated than in the case of the canonical BIS definition. Optimistically, the tuned BIS removes ∼25-30% of the intrinsic RV variability, depending on the line. As for BIS, the bisector curvature C was successfully tuned for most lines. In fact, the tuned C vastly outperformed the canonical C, which generally failed to produce any significant improvement over the intrinsic RMS, with the exception of the Fe I 5383 Å line. Optimistically, the tuned C diagnostics remove ∼20-30% of the intrinsic RV variability. At best, however, the tuned C diagnostics match the performance of the tuned BIS. As previously mentioned in §4.2.1, this would suggest that increasing the number of averaging regions does not increase the performance of a diagnostic.

It is notable, and quite visually apparent from Figure 3, that the lines with greater intrinsic variability see the largest absolute reduction in their RMS RV. Less variable lines see a reduction that is less significant. Interestingly, the full span in intrinsic (i.e., unmitigated) RMS RVs is about ∼30 cm s−1 (from ∼75 cm s−1 to ∼45 cm s−1), whereas the span in mitigated RMS RVs is only about ∼15 cm s−1 (from ∼50 cm s−1 to ∼35 cm s−1). This decreased span is interesting, and might suggest that simple linear correlations between bisector diagnostic and anomalous RV can only correct granulation velocities to some limit. This, and other possibilities, are discussed further in §5.2.

The Mn I 5432 Å Line

Compared to all other lines considered in our analysis of bisector diagnostics, the Mn I 5432 Å line is an interesting outlier: the tuning of both BIS and C failed for this line. The standard BIS diagnostic produced a ∼5% improvement over the intrinsic RMS RV, compared to ∼7% for the tuned BIS. Likewise, the standard C produced a ∼6% improvement, compared to ∼7% for the tuned C. We do not consider these changes between the tuned and un-tuned diagnostics to be significant. On first inspection, it is not immediately clear why the tuning failed. Mn I 5432 Å is among the shallowest studied lines, which might suggest that the averaging regions are too small or too close to each other to meaningfully diagnose changes in bisector shape. Yet, both the BIS and tuned BIS performed well for other shallow lines (e.g., Fe I 5382 Å), suggesting that the modest depth of Mn I 5432 Å is not the implicating factor.

It is also plausible that the failure of the bisector diagnostic tuning is related to the intrinsic variability of this line: Mn I 5432 Å is the least variable line examined in this work (Table 2 and Figure 3), and so it might follow that the shape of this line is minimally perturbed by granulation. Lending credence to this narrative, it has been previously recognized in the literature that the properties of the Mn I 5395 and 5432 Å lines are somewhat peculiar: these lines produce anomalous abundances (Booth et al. 1984; Scott et al. 2015) and are curiously sensitive to global solar activity, despite forming in the photosphere (Doyle et al. 2001; Livingston et al. 2007). Vitas et al. (2009) demonstrate that this peculiar behavior is a consequence of the hyperfine structure of the Mn atom (Murakawa 1955), which drives significant non-thermal broadening of the Mn I 5395 and 5432 Å lines. It is this broadening, claim Vitas et al. (2009), that makes these lines less susceptible to "smearing" by convective motions. The minimal variability we observe for the Mn I 5432 Å line suggests consistency with these previous works; the failure of the diagnostic tuning can then be understood as a simple reflection of the fact that there is minimal line-shape variability to optimize on.

[Table 4 caption: Optimized regions for bisector diagnostics (BIS and C) and corresponding improvement in RMS RV. As in Table 3, the improvement is relative to the "intrinsic" RMS of each line and rounded to the nearest whole percent. The parameters b1 and b2 refer to the depths bounding the v_t region; b3 and b4 bound the v_b region (see Equation 3). Similarly for C: c1 and c2 bound v_1, and so on (see Equation 4).]

Granulation Signal Aggregation

In practice, it is not feasible to measure precise bisector shapes (or RVs) for individual spectral lines for stars other than the Sun, owing to limitations imposed principally by photon noise, spectral resolution, and detector sampling. These limiting factors were not accounted for in the analysis presented in §4.2; when we re-perform these simulations at R ∼ 120,000 and per-pixel SNR ∼ 400 (values representative of those optimistically achieved with current EPRV surveys, such as those conducted with EXPRES or NEID), none of the considered bisector diagnostics retains its correlation with the measured line velocity (not shown), and we find RMS RVs well exceeding the m s−1 level. This imprecision is not surprising, given that photon noise dominates compared to the variability produced by granulation in this regime.

Of course, velocities are not measured from individual lines in RV surveys. Typically forward-modeling (e.g., Petersburg et al. 2020), a CCF-based approach (e.g., Pepe et al. 2002), or a line-by-line method (e.g., Dumusque 2018) is used to aggregate velocity information across lines in order to measure a precise RV.
Schematically, it may be helpful to think of this methodology in terms of the central limit theorem: each line contains some amount of information about the bulk velocity of the star with some error (e.g., from photon noise). By measuring velocities from many lines together, one can estimate the "true" bulk velocity as the (weighted) sample mean of the individual line velocities.

However, this picture begins to fall apart when one considers the effects of granulation on spectral lines. In reality, bisector shapes (e.g., Figure 2) and bisector variability (e.g., Figure 3) will differ across lines. As a result, the bisector of a CCF constructed from many disparate lines will not be representative of an underlying "true" bisector common to each line. Therefore, any underlying correlation between individual line profile (or individual-line CCF) bisector shape and velocity (e.g., Figure 4) is not retained in the case of the spectrum-CCF bisector shape and velocity. Indeed, one recent work (Sulis et al. 2023) examined ESPRESSO CCFs (which are measured from thousands of disparate lines) for two bright stars (a G0V and an F7V), and was unable to find significant correlations between CCF bisector curvatures and RVs, as in Cegla et al. (2019) and this work.

Simulations of Binned Granulation Signals

In principle, it may be possible to overcome this present limitation by using much more selective line lists in CCF measurements, i.e., by constructing CCFs for families of lines that share very similar bisectors. To test this possibility, we used GRASS to synthesize time series consisting of many copies of the Fe I 5432 Å line (a moderately deep Fe I line with a classical "C"-shaped bisector that achieved a fractional reduction in its RMS RV representative of the median improvement among the lines studied in this work; see Table 4). Only the central wavelengths of these copies were varied; their convective blueshifts were identical, as were the depths of the disk-resolved line profiles (the disk-integrated line depths varied ≲1% due to the wavelength scaling of the rotational broadening; see, e.g., Carvalho & Johns-Krull 2023). The copies were initially placed 4 Å apart, such that they were totally unblended. To mitigate any impact from aliasing with the discretized grid of wavelengths, the position of each line was perturbed by a (known) random offset drawn from a Gaussian distribution with width 0.1 Å. Because these lines were synthesized as copies of one another with known absolute convective blueshifts, they share a common time-averaged bisector shape and they are also perturbed in an identical manner in each snapshot of the simulation.
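The layout of these identical line copies can be summarized in a short sketch; the 4 Å spacing and 0.1 Å Gaussian perturbation follow the description above, while the starting wavelength and random seed are arbitrary placeholders.

```python
import numpy as np


def line_copy_centers(n_lines, start=5000.0, spacing=4.0, jitter=0.1, seed=0):
    """Central wavelengths (in Angstroms) for n_lines identical, unblended line copies.

    Copies are placed `spacing` apart and each is perturbed by a known random
    offset drawn from a Gaussian of width `jitter` to avoid aliasing with the
    discretized wavelength grid."""
    rng = np.random.default_rng(seed)
    centers = start + spacing * np.arange(n_lines)
    offsets = rng.normal(0.0, jitter, size=n_lines)
    return centers + offsets, offsets   # the offsets are known and can be removed later
```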
Then, varying the spectral resolution, SNR, and number of identical lines considered, we calculated CCFs from which we measured velocities and bisector shape diagnostics. The resolutions considered are meant to reflect the (approximate) median resolutions of existing spectrographs, from KPF (R ∼ 98,000; Gibson et al.) up through ESPRESSO in UHR mode and PEPSI. For the sake of comparison, we also ran simulations at R ∼ 350,000 to probe a fictionalized, best-case scenario as a fiducial. In all cases, the sampling was set at ∼4 pixels per LSF FWHM (i.e., twice the number of pixels necessary to satisfy the Nyquist criterion), and the width of the Gaussian CCF mask was set as the speed of light divided by the resolution. We found that changing the number of pixels per LSF FWHM to two or six pixels did not significantly impact our results. Velocities were measured from the resulting CCFs as described in §4.1. This process was performed repeatedly in order to measure sample means and standard deviations. The results of this exercise are visualized in Figure 5 for several spectral resolutions.

[Figure 5 caption, fragment: note that the step sizes for SNR and number of lines are not of fixed size. Unsurprisingly, both the raw and mitigated RMS RVs improve as the SNR and number of lines are increased. Notably, though, the achievable mitigation is strongly dependent on spectral resolution; only the three highest-resolution simulations achieve statistically significant reductions in the RMS RV (see Figure 6 and associated text). Combinations of SNR and number of binned lines denoted with a "-" showed no significant improvement in RMS RV; none of the three lowest-resolution simulations (R ∼ 98,000, R ∼ 120,000, and R ∼ 137,000) achieved significant improvements in RMS RV for any combination of SNR and number of binned lines and so are not shown. It is plausible that ESPRESSO in UHR mode and PEPSI could demonstrate a retrieval of (and correction for) convective variability for a bright star if enough similarly-varying lines could be identified and binned, as discussed in §4.3.2.]

First, we consider the results of the simulation where no granulation mitigation was performed (the left-hand panels of the plots in Figure 5). Across all spectral resolutions, the observed RMS RV is dominated by photon noise when the SNR and number of lines used in the CCF are low. In the most optimistic scenario, where the number of lines is large and the SNR is high, photon noise is essentially entirely averaged out and the "intrinsic"
sic" variability of the line from granulation entirely dominates the RMS RV.It is notable that the higher spectral resolution simulations do not necessarily perform better when no granulation mitigation is attempted, except in the most extreme scenarios where the SNR and number of lines are quite low.Beginning at moderate SNR and number of lines (e.g., SNR ∼ 400 and 200 lines) and up to the highest SNR and number of lines, the measured RMS RVs improve by only several cm s −1 at most.For each combination of SNR, spectral resolution, and number of lines, we also attempted to construct a correlation between the measured velocities and tuned CCF BIS (since the tuned BIS generally outperformed all other considered diagnostics).The linear fit to these quantities was then used to correct the measured velocities, from which the final RMS RV was measured and reported.The results of this procedure are shown in the right-hand panels of Figure 5, and the percent improvements in RMS RVs (with propagated errors) are shown in Figure 6 for the subset of simulations that achieved meaningful improvement.Notably, significant mitigation via bisector diagnostic correlation can only be achieved at higher spectral resolution.At the three lowest considered resolutions (R ≤ 137,000), no scientifically significant improvement in the RMS RV is achieved, even at the maximum considered SNR and number of lines.At R ∼ 190,000, the improvement in RMS RVs is at the ≲20% level in the best-case scenario, perhaps approaching detectability.At the highest considered resolutions, the most optimistic improvement in the RMS RV is consistent with that achieved in the fullresolution, noiseless simulation laid out in §4.2.2.We discuss some considerations for attempting the observational retrieval of such signals below. Practical Considerations for Existing Instruments The analysis presented in §4.3.1 and shown in Figures 5 and 6 suggests that existing instruments might might be able to capture (and correct for) granulationdriven variations in line shape.Whether this can be performed in reality will depend on a few factors.First, signal must be binned across lines which are perturbed by convection in a similar-enough manner.Some initial work (Dravins et al. 2017a,b) has already successfully shown that "similar" Fe lines can be binned in order to enable retrievals of time-averaged spatially resolved stellar line profiles during planet transits.Additionally, HD simulations performed by Dravins & Ludwig (2023) show that different absorption lines fluctuate in phase (with some dependence on line strength and spectral region), suggesting that it is indeed plausible that convective variability can be coherently binned across lines. Second, the requisite SNR of the unbinned spectra may change depending on the number of lines that are available for binning.For the science case demonstrated in Dravins et al. 
Practical Considerations for Existing Instruments

The analysis presented in §4.3.1 and shown in Figures 5 and 6 suggests that existing instruments might be able to capture (and correct for) granulation-driven variations in line shape. Whether this can be performed in reality will depend on a few factors. First, signal must be binned across lines which are perturbed by convection in a similar-enough manner. Some initial work (Dravins et al. 2017a,b) has already successfully shown that "similar" Fe lines can be binned in order to enable retrievals of time-averaged spatially resolved stellar line profiles during planet transits. Additionally, HD simulations performed by Dravins & Ludwig (2023) show that different absorption lines fluctuate in phase (with some dependence on line strength and spectral region), suggesting that it is indeed plausible that convective variability can be coherently binned across lines.

Second, the requisite SNR of the unbinned spectra may change depending on the number of lines that are available for binning. For the science case demonstrated in Dravins et al. (2017b), binning only 26 lines was sufficient; resolving convective variability in other stars will require much greater precision. Stenflo & Lindegren (1977) list over 400 particularly clean Fe lines in the 400-686 nm range in the solar spectrum, so it is optimistically plausible that several hundred lines could be available for binning, even after accounting for practical details such as losses due to detector gaps and lines falling in the edges of orders. It is apparent from Figure 6 that an implausible number of lines (>500) would need to be binned to compensate for even moderate SNRs. We caution that observations should (at least initially) target the highest achievable SNR (therefore favoring brighter stars), since the number of lines available for binning is not precisely known at present.

Finally, the resolution of the observing instrument appears to be the strongest constraint on the detectability of convective variability. As discussed in §4.3.1, the three lowest-resolution simulations achieved no reduction in RMS RV regardless of the SNR or number of binned lines. Optimistically, ESPRESSO in UHR mode has sufficient resolving power to detect this variability; prospects for PEPSI are slightly more promising. Additional gains can be made beyond the resolution of PEPSI, though increasing the resolution much further may offer diminishing returns (in addition to the growing challenge of reaching adequate SNR at such high resolutions). We discuss some considerations for the design of future instruments in §5.2.1.

We caveat that these simulations are not survey or truly realistic observation simulations and should not be interpreted as such. Notably, the simulated SNR is divorced from exposure time, and the instrument LSFs are modeled as simple Gaussians. Mirror size, throughput, limiting magnitudes of the observed stellar sample, etc. vary greatly across instruments, and it is beyond the scope of this study to consider these myriad factors. Instead, we emphasize that these simulations were carried out in order to assess how photon noise and spectral resolution interact and affect attempts to measure and mitigate granulation signals encoded in the shapes of lines. As shown by this exercise, retrieving these signals given realistic observing and engineering constraints will be quite challenging. Despite this challenge, we believe that recovering these granulation signals may be possible at present, and perhaps a requirement for achieving ∼10 cm s−1 wholesale RV precision, as discussed further in the following section.

DISCUSSION

Recent studies, particularly Meunier et al. (2023) and Lakeland et al. (2024), have shown that granulation and supergranulation will pose significant barriers to the detection of Earth-mass exoplanets, even around very magnetically inactive stars. Assessments of optimal observing strategies for mitigating granulation and supergranulation (particularly Dumusque et al. 2011 and Meunier et al. 2015) agree that binning multiple observations over timescales larger than the typical (super)granule lifetime performs better than consecutive long exposures, but disagree on the absolute effectiveness of brute-force binning alone as a mitigation strategy. As an alternative to a "beating-down-the-noise" approach, Cegla et al. (2019) used MHD-driven Sun-as-a-star simulations to show that various summary statistics capturing changes in individual line bisectors strongly correlate with the anomalous, granulation-induced velocity shift.
To complement these previous studies, we have presented and used v2.0 of the GRASS tool to empirically synthesize stellar line profiles with perturbations from granulation. In the following subsections, we discuss the various insights into granulation mitigation that simulations from GRASS have offered. We additionally comment on what further work will be needed to enable and implement these mitigation methods in EPRV surveys.

Need for Broader Studies of Line-by-Line Variability

In §4.1, we used GRASS to empirically measure the intrinsic, granulation-driven variability for the 22 solar lines shown in Figure 1. As shown in Figure 3, the most variable line exhibits a ≳70 cm s−1 RMS RV, and the least variable line a ≳40 cm s−1 RMS RV. That different lines exhibit different degrees of variability is unsurprising; past works have shown that measured RVs change with the sets of lines used to measure velocities (e.g., Meunier et al. 2017; Al Moulla et al. 2022, 2024). Lines that trace global activity have been the focus of many studies, and good progress has been made toward understanding the physical mechanisms underpinning their ability to trace this activity (e.g., Vitas et al. 2009; Wise et al. 2022; Cretignier et al. 2024).

In comparison to studies of global activity, the magnetoconvective variability of individual lines has been somewhat understudied. Studies utilizing HD and MHD codes (e.g., Dravins et al. 2017a; Cegla et al. 2019; Dravins & Ludwig 2023) have made important contributions to our understanding of this "microvariability" (borrowing the language of Dravins & Ludwig 2023), but are limited by computational costs and our current lack of disk-resolved line profiles for other stars to use as a basis for validation. A growing number of works have catalogued and investigated the absolute convective blueshift of lines both in the Sun and in other stars. Studies of the Sun have noted trends in convective blueshift with line depth (e.g., Reiners et al. 2016b), temperature (Al Moulla et al. 2022), and wavelength (Ellwarth et al. 2023a). Studies of other stars have shown that the gradient of convective velocities increases with stellar effective temperature (e.g., Gray 2009 and Liebing et al. 2021). Though it is possible to estimate an approximate granulation noise amplitude with simple stellar scaling relations (Dalal et al. 2023), previous observational studies have thus far not probed the temporal variability in the blueshifts of individual lines, a key focus of this work.

Looking beyond the characterization of the 22 lines presented in this work, future studies should attempt to characterize the convective jitter in individual lines en masse. Such works could use existing solar data sets (e.g., solar observations from HARPS-N, NEID, and/or KPF) to empirically measure these jitters, though the lower spectral resolutions of such instruments may make direct measurement difficult for individual lines. Separating the influence of global and/or localized magnetic activity may also prove quite challenging. Despite these challenges, achieving a synoptic understanding of stellar velocity fields, especially the convective velocity field, will be a key goalpost toward the detection of Earth analogs.

Promise and Limitations of Correction with Bisector Diagnostics
Cegla et al. (2019) found that various bisector diagnostics (particularly the bisector inverse slope BIS, the bisector amplitude a_b, and the bisector curvature C) could be used to remove 50-60% of the RV noise from granulation in their synthetic Fe I 6302 Å profiles. They noted, however, that the strength of the correlations used to achieve this reduction would likely change line-to-line and star-to-star. To expand on this study, we have performed this same exercise for the 22 solar lines studied in this work. Following Cegla et al. (2019), we used both the canonical definitions of these bisector diagnostics (given in §4.2) and definitions tuned to maximize the correlation for each given line and diagnostic.

As Cegla et al. (2019) predicted, the canonical definitions of BIS and C need to be tuned to each line to maximize their predictive power. In general, we found that the best-performing bisector diagnostics could be used to improve the RMS RV for a given line by ∼25-35% (see Table 4). Cegla et al. (2019) were able to correct their observed variability at the 50-60% level in their most optimistic scenario, but it should be noted that their raw RMS RV was ∼10 cm s−1, likely artificially low as a result of the enhanced magnetic field in their simulations (see discussion in §2.3 of Cegla et al. 2019). Although our best-case fractional improvement in line RMS RV is generally lower than that found by Cegla et al. (2019), our absolute improvement is relatively large; e.g., we achieve reductions in excess of 20 cm s−1 for some lines using the optimized bisector diagnostics.

Interestingly, none of the diagnostics were able to correct the RMS RV of any line to below ∼30 cm s−1, suggesting that not all of the observed RV variability is driven by changes in line asymmetry. Pure Doppler shifts of bisectors will produce no change in the considered bisector diagnostics (modulo small measurement errors owing to pixelization), and so the remaining RV variability likely constitutes an upper limit on the net RV shift introduced by granulation. Of course, the bisector-diagnostic correlations are imperfect, and so some asymmetry-driven variability likely remains. Instrumental jitter probably also contributes to this remaining variability (see §2.1 and Löhner-Böttcher et al. 2017). It is possible to envision more advanced methods for correcting the asymmetry-driven variability (which would enable tighter constraints on the purely shift-driven granulation noise), but limitations introduced by finite sampling and photon noise will probably necessitate some degree of averaging within the bisector (as in the definitions of BIS and C).

If the residual, uncorrected RMS RV observed is indeed created primarily by pure shifts in bisector position, then another strategy may need to be devised to cope with this variability. One plausible avenue is the binning or smoothing of measurements over time. Although binning alone will likely not suffice as a granulation mitigation strategy (Meunier et al. 2015), in reality some combination of diagnostic-based granulation mitigation and averaging could be employed. As Cegla (2019) notes in their conclusion, the path forward likely lies in a combination of empirically and theoretically motivated strategies.
Overcoming Observational Constraints

The correlations between bisector diagnostic and anomalous RV will not be observable for individual stellar lines given constraints introduced by the spectral resolution and typical SNR achieved in current instrumentation. These same limitations introduce large errors in the RVs measured from individual lines; in practice this is overcome by binning signal across lines via a forward-modeling, CCF, or line-by-line approach. However, as currently implemented, these techniques are not well-suited for measuring changing convective velocities: whereas a wholesale motion of the star will create a uniform Doppler shift in every line, changes in convection will manifest differentially in each line.

Although it is beyond the scope of this work to devise new methods for binning granulation signals across lines, we carried out an exercise in §4.3 to show that, in principle, these signals can be aggregated and retrieved. Synthesizing spectra consisting of varying numbers of identical lines with only differing central wavelengths at varying SNRs (owing to photon noise alone) and spectral resolutions, we found that correlations between CCF BIS and RV could be retrieved under reasonably realistic (if slightly difficult) conditions. Of the existing suite of high-resolution RV spectrographs, it is most plausible that ESPRESSO (in UHR mode) or PEPSI could resolve the sub-m s−1 stellar line-shape variations that characterize granulation-driven jitter, assuming an adequate number of similarly varying lines can be identified and binned. As discussed in §4.3.2, a recent work based on HD simulations (Dravins & Ludwig 2023) demonstrated that subsets of lines do indeed vary in phase, motivating our claim that variability in granulation signals would need to be coherently binned across lines. Line lists derived from or informed by such HD simulations could provide a valuable starting point for observational studies of convective variability in other stars.

Looking forward to future generations of instruments, particularly in the coming age of extremely large telescopes (ELTs), we emphasize that spectral resolution is fundamental to resolving the signals that granulation encodes in the shapes of spectral lines. Though the velocities that fall out of these changes in line shape and position are largely treated and discussed as noise within the EPRV community and literature at present, it is important to recognize that mitigating their impact on the measurement of cm s−1-precise bulk motions of stars likely lies in treating them as signals that vary somewhat from line to line. As shown in §4.3 and Figure 5, the spectral resolution of an instrument determines its ability to resolve perturbations in the shapes of lines. In order to resolve this variability, future instruments should target higher resolutions: ESPRESSO in UHR mode (at median R ∼ 190,000) currently sets the bar for future instruments to surpass. As Dravins et al. (2017b) argue, such ultra-high resolution instruments would enable extremely precise and novel studies of both stars and planets.

Limitations and Caveats of GRASS
(2022), GRASS has limitations that should be considered both in the context of this study and before using GRASS in future works. Principally, GRASS uses solar data to construct line profiles and spectra. The time-averaged shape of bisectors is known to be highly sensitive to the structure of the stellar atmosphere, and is therefore a strong function of the stellar spectral type and luminosity class (Gray 2008). An independent limitation of GRASS is imposed by the length of the time series used as input to the simulation. These observations are described in greater detail in §2 of Löhner-Böttcher et al. (2019); but, in brief, the minimum time baseline for most observations was ∼40 minutes. Consequently, GRASS is sensitive to frequencies corresponding to the maximum realistically expected lifetime of granules (see Hirzberger et al. 1999), but is completely insensitive to longer-timescale phenomena, namely supergranulation. Because of this limitation, we refrain from running simulations of smoothing and binning as in Meunier et al. (2015), instead focusing on line-by-line variabilities and bisector-diagnostic-based mitigation techniques.

CONCLUDING REMARKS

Compared to other drivers of intrinsic stellar variability, methods for mitigating granulation-driven RV variability are comparatively poorly developed. Because stellar granulation precipitates a ∼30-70 cm s−1 RV noise source (depending on the line or lines measured), current and upcoming RV surveys will need to develop methods for mitigating the effects of granulation in order to detect true Earth twins, i.e., Earth-mass planets orbiting in the habitable zones of Sun-like stars. In this work, we:

1. Present v2.0 of the GRanulation And Spectrum Simulator (GRASS) and document the changes and upgrades since v1.0 (Palumbo et al. 2022);
2. Verify that GRASS empirically reproduces the time-averaged, disk-integrated line profiles and bisectors observed by Reiners et al. (2016b);
3. Quantify the line-by-line RV variability in 22 solar lines, showing that there is a ∼30 cm s−1 difference between the most variable and least variable lines;
4. Show that diagnostics of bisector shape generally correlate with the granulation-induced RV (consistent with the MHD-driven simulations of Cegla et al. 2019), and can be used to remove 25-35% of the measured granulation noise;
5. Demonstrate that although these correlations can't be retrieved for individual lines at the resolutions and typical SNRs achieved by existing spectrographs, informed and selective binning of similar lines can overcome these limitations;
6. Show that on the basis of their ultra-high spectral resolution, ESPRESSO (in UHR mode) and PEPSI are the best-suited existing spectrographs to demonstrate a retrieval of such line-shape correlations;
7. Argue that future spectrographs should target ultra-high resolutions of R ≳ 190,000 in order to resolve convective variability in lines;
8. Emphasize that granulation encodes a signal in the shapes of lines, and that future instrument builders should be mindful of the high spectral resolution needed to faithfully resolve them.

in normalized flux, consistent with the methods used by Gray (1988) and Dall et al. (2006). As in Palumbo et al. (2022), we fit separate Voigt profiles to the red and blue wings of each line in order to measure a smooth line width into the continuum (because the deep solar lines of interest are often increasingly blended in their wings). We use a Voigt profile, rather than, e.g., a Gaussian, since the different lines are shaped quite differently in the wings and core (see Figure 1). Voigt profiles are able to capture this diversity in shape more flexibly and accurately. The effects of this approach are discussed specifically for the Fe I 5434 Å line in §4.1 of Palumbo et al. (2022) and generally in §3.1.
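As a rough illustration of the wing-fitting approach described above, the following sketch fits a Voigt profile to a single wing of a continuum-normalized line using scipy. The parameterization, initial guesses, and masking are assumptions made for the example; they are not taken from the GRASS pre-processing code.

```python
import numpy as np
from scipy.special import wofz
from scipy.optimize import curve_fit

def voigt_absorption(wave, depth, center, sigma, gamma):
    """Continuum-normalized absorption line with a Voigt shape.
    depth : line depth at center (0-1); sigma : Gaussian width;
    gamma : Lorentzian HWHM (both in the same units as wave)."""
    z = ((wave - center) + 1j * gamma) / (sigma * np.sqrt(2.0))
    peak = wofz(1j * gamma / (sigma * np.sqrt(2.0))).real
    return 1.0 - depth * wofz(z).real / peak

def fit_wing(wave, flux, center_guess, side="blue"):
    """Fit a Voigt profile to the core plus one wing of a line, so that blends
    in the opposite wing do not bias the fitted width."""
    core = int(np.argmin(flux))
    mask = wave <= wave[core] if side == "blue" else wave >= wave[core]
    p0 = [1.0 - flux[core], center_guess, 0.05, 0.02]   # depth, center, sigma, gamma
    popt, _ = curve_fit(voigt_absorption, wave[mask], flux[mask], p0=p0,
                        bounds=([0.0, center_guess - 0.5, 1e-4, 0.0],
                                [1.0, center_guess + 0.5, 1.0, 1.0]))
    return popt
```

Fitting the blue and red wings separately (side="blue" and side="red") and combining the two fitted models yields a smooth, blend-resistant estimate of the line width into the continuum, in the spirit of the procedure described above.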
Like the width measurements, the bisector measurements become fraught near the continuum, owing both to blends and to measurement uncertainty, which is inversely proportional to the first derivative of the spectrum at the intensity of interest (Gray 2008). To compensate for this, we fit a polynomial to the lower 80% of each bisector and model the remaining top 20% as the extrapolation of the best-fit polynomial. Similarly to the fitting procedure employed by Zhao & Dumusque (2023), we use a third-order polynomial for bisectors at µ > 0.4, and a first-order polynomial for bisectors at µ ≤ 0.4. Example disk-resolved spectra with model fits and input data with extrapolations are shown in Figure 7.

Rather than writing the pre-processed data for each limb position to separate FITS files, as in GRASS v1.0 (Palumbo et al. 2022), we instead write all data for a given line to an HDF5 file, in order to reduce the storage size of the input data and excess I/O latency during synthesis. The input data are available in a Zenodo record (Palumbo et al. 2023b), which GRASS automatically downloads upon installation. GRASS includes additional functions used to read and manipulate the data stored in these files. These functions are described in the package documentation available on GitHub.

A.2. Changes to Model Grid

As explained in §3.2.3 and §3.2.4 of Palumbo et al. (2022), v1.0 of GRASS synthesized disk-integrated line profiles by integrating over an evenly sampled 2D stellar grid projected on the sky plane. To enable more complex modeling of stellar geometries and to be consistent with other Sun-as-a-star simulations, v2.0 of GRASS now tiles the surface of a sphere in stellar longitude and latitude, following from the procedures used in Vogt et al. (1987) and Piskunov & Kochukhov (2002). In brief, we first divide the star into N ϕ latitude slices. The number of longitudes sampled in each latitude slice is then given by

N θ = (360°/∆ϕ) cos ϕ,

where ϕ is the latitude at the center of a given latitudinal slice and ∆ϕ the latitude span of the slices. In each slice, N θ is rounded up to the nearest whole number. As noted in Vogt et al. (1987), this tiling scheme produces tiles whose areas vary minimally with latitude, except near the poles where the tiles become triangular. Tiles that are located entirely on the invisible hemisphere of the star do not contribute to the integrated flux. Because the tiles are no longer of equivalent projected area (as in GRASS v1.0, Palumbo et al. 2022), we now weight the contribution of each tile to the disk-integrated flux by the projected area of each tile, in addition to limb darkening. An example grid with color-coded weights is shown at left in Figure 8. The number of longitudinal elements is set by the latitude increment ∆ϕ, which is equal to 180°/N ϕ; consequently, we parameterize the resolution of the spatial grid in terms of only N ϕ. In previous works that have implemented the Vogt et al. (1987) tiling scheme (e.g., Reiners et al. 2016a), N ϕ is customarily set to some very large number in order to minimize errors introduced by the discretization of the stellar surface elements. However, as explained in §4.2 of Palumbo et al. (2022), the resolution of the spatial grid must correspond to the (average) angular size of observed patches in Löhner-Böttcher et al. (2018) and Löhner-Böttcher et al.
(2019); i.e., there is an optimal N ϕ that cannot be freely chosen.With our modified tiling procedure, we find that N ϕ = 197 yields tile sizes corresponding to the intensity-weighted average area of the observed patches.As in §4.2 and Figure 4 of Palumbo et al. (2022), we have verified that this resolution produces an RMS RV broadly consistent with those observed by Elsworth et al. (1994) and Pallé et al. (1999), with the exact RMS RV depending on the input data used to synthesize the spectra (see §4.1). Generally, at N ϕ = 197 the star is not tiled densely enough and appreciable (∼m s −1 ) errors in the rotational velocity are introduced by the sparse tessellation.To circumvent this problem, we follow the procedure developed by Cegla et al. (2019) and compute the limb darkening, projected rotational velocity, and projected areas in each tile as the (weighted) average of those values computed on a 40-by-40 grid of sub-tiles.Within each larger tile, the sub-tiles are evenly spaced in stellar latitude and longitude.The limb-darkened intensity assigned to each large tile is then given as the mean intensity across the corresponding sub-tiles and the rotational velocity as the weighted mean of the sub-tile velocities.This sub-tiling scheme is illustrated at right in Figure 8. To calibrate the necessary number of sub-tiles, we computed the disk-summed rotational velocity and projected tile area as a function of the number of sub-tiles.Ideally, as the number of sub-tiles is increased, the sum of the tile rotational velocities should tend to zero, and the sum of the projected areas should approach π.With a 40-by-40 grid of sub-tiles in each larger stellar tile, we find that the disk-summed rotational velocity error is ∼3.7 × 10 −9 m s −1 ; the error in the total projected area of the disk is ∼5.9 × 10 −9 .To assess the impact of these errors on the synthesized line profiles, we compared line profiles generated with an arbitrarily high density of sub-tiles (1600-by-1600 grid) and with the standard 40-by-40 grid.Taking the high-resolution line profile as the fiducial, the flux errors did not exceed ∼8.9 × 10 −10 ; likewise, the line velocity errors were sufficiently small at 8.9 × 10 −7 m s −1 . A.3. Treatment of Convective Blueshift Stellar absorption lines are known to exhibit convective blueshift that varies with both limb angle (Löhner-Böttcher et al. 2018, 2019) and depth (Reiners et al. 2016b;Ellwarth et al. 2023b).By default, GRASS uses only the convective blueshifts in the solar observations in its construction of line profiles (as is the case for the simulations and analysis presented in this work).However, if GRASS is used to model a line of arbitrary depth (following the procedure described in §3.2.2 of Palumbo et al. 2022), it may also be desirable to artificially prescribe the disk-integrated convective blueshift of the synthetic line profile.To enable this modeling, GRASS now optionally includes an additional convective blueshift term in order to reproduce the solar scaling relation given in Reiners et al. (2016b).Specifically, we use Equation 2 where ∆v conv is measured in units of m s −1 .In implementation, GRASS draws the appropriate convective blueshift from a look-up table tabulated from this polynomial relation, rather than evaluating the polynomial for each synthetic line. A.4. 
GPU Implementation

Palumbo et al. (2022) presented v1.0 of GRASS, which synthesized spectra serially on a CPU. Depending on the number of spectral resolution elements and the number of time steps in the synthesis, the required computation time could become quite large. As shown in another recent work (Zhao & Dumusque 2023), thoughtful use of parallelization can greatly increase the performance and usability of stellar modeling codes, particularly owing to the performance boost enabled by performing the necessary interpolations (see §3.2.3 of Palumbo et al. 2022) of spectra en masse. In order to compute spectra more efficiently, we have implemented a GPU version of GRASS.

The GPU kernels are written in Julia using the CUDA.jl package (Besard et al. 2018b, 2019) for use with NVIDIA GPUs. Multiple steps of the updated synthesis procedure have been parallelized. Specifically, the spectral synthesis computations and interpolations for each spatial grid cell and wavelength element are performed in parallel, in addition to the computation of weights governed by the viewing geometry, limb darkening, and differential rotation of the model star. The same high-level functions used to generate synthetic spectra are able to interface with the GPU kernels such that no knowledge of GPU computation is needed by users of GRASS to take advantage of this implementation. As in the CPU implementation, the computation of the disk-integrated spectrum is performed in place to avoid excess memory allocation, which could otherwise exceed the available VRAM at moderate problem sizes.
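The accuracy penalty of single-precision synthesis discussed below can be illustrated with a small, self-contained numpy experiment: storing wavelengths of order 5434 Å as 32-bit floats leaves only ∼10−4 Å of resolution, so the small differences entering linear-interpolation weights lose most of their significant digits. The toy model below is not the GRASS GPU kernel; it only demonstrates the order of magnitude of the effect.

```python
import numpy as np

def interpolation_flux_error(dtype):
    """Interpolate a deep line onto a Doppler-shifted wavelength grid with all
    arrays stored at the given floating-point precision, and compare against a
    float64 reference. At float32, wavelengths near 5434 A are representable
    only to ~2e-4 A, which propagates through the interpolation into flux
    errors at the few-parts-in-1e3 level for a steep line profile."""
    wave64 = np.linspace(5433.0, 5435.0, 2000)
    flux64 = 1.0 - 0.8 * np.exp(-0.5 * ((wave64 - 5434.0) / 0.05) ** 2)
    target64 = wave64 * (1.0 + 1000.0 / 299792458.0)   # 1 km/s Doppler shift

    wave, flux, target = (a.astype(dtype) for a in (wave64, flux64, target64))
    shifted = np.interp(target, wave, flux)             # degraded-precision inputs
    reference = np.interp(target64, wave64, flux64)     # float64 reference
    return float(np.max(np.abs(shifted.astype(np.float64) - reference)))

print("float64 max |flux error|:", interpolation_flux_error(np.float64))  # ~0
print("float32 max |flux error|:", interpolation_flux_error(np.float32))  # ~few x 1e-3
```

Errors of this size, repeated over many interpolations per tile and per epoch, are consistent with the few cm s−1 velocity errors quoted for the single-precision implementation.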
As shown in Panel (a) of Figure 9, the double-precision GPU implementation of GRASS produces flux values that are accurate to the CPU implementation to within ∼several parts in 10 15 when using the same random number generator (RNG) seed. Due to catastrophic cancellation arising in the interpolation step of the spectral synthesis (described in §3.2.3 of Palumbo et al. 2022), the flux values produced by the single-precision GPU synthesis are accurate to the CPU implementation at only about a couple parts in 10 3. Propagating these errors in flux to velocity, the single-precision flux error produces a velocity error of order ∼several cm s−1, whereas the double-precision GPU velocity error amounts to only ∼1×10 −10 m s−1. Given the appreciable velocity error in the single-precision implementation, only the double-precision GPU implementation of GRASS was used for the computations and results presented in this work, and we strongly discourage the use of the single-precision GPU implementation for synthesizing spectra with GRASS. To prevent unwitting single-precision computations, GRASS will throw a warning if any GPU allocations are made with single-precision floats.

To assess the relative performance of the CPU and GPU implementations, we performed benchmarks of GRASS with a 2.5 GHz Intel Xeon Gold 6248 CPU and an NVIDIA Tesla V100S GPU running with CUDA Driver Version 550.54.14 and CUDA Toolkit Version 12.4. Because the GPU implementation performs parallel computations over the spatial and spectral grid cells, we evaluated performance as a function of the number of spectral resolution elements (or, somewhat equivalently, the width of the spectrum in units of length), because the number of spatial grid cells should be fixed for physical validity (see §4.2 of Palumbo et al. 2022). The number of time steps in the simulation was held fixed at N t = 50, corresponding to 12.5 minutes of time given the time step of 15 seconds. The results of these benchmarks are shown in Panel (b) of Figure 9. For larger problem sizes, the GPU implementation outperforms the CPU implementation by a factor of about 100×.

B. NOTES ON SPECIFIC LINES

In Palumbo et al. (2022), we demonstrated that GRASS accurately reproduced the line profile and bisector of the Fe I 5434 Å line by comparison to the IAG Solar Atlas (Reiners et al. 2016b). In this work, we have performed this semi-quantitative comparison for all 22 lines used in the GRASS template line library (see §3.1). An example set of comparisons is shown in Figure 10; line profile and bisector comparison plots are available for all 22 lines in the online journal. Below, we provide comments on specific details relevant to each line. For additional discussion of the convective blueshift of these lines, as well as their historical use in the heliophysics literature, refer to Löhner-Böttcher et al. (2018) and Löhner-Böttcher et al. (2019).

B.1. Lines around 5251 Å

Two lines in this region were studied: Fe I 5250.2 Å and Fe I 5250.6 Å. As noted in Löhner-Böttcher et al. (2019), other lines in this region were extremely weak or significantly blended in the wings. Both Fe I 5250.2 Å and Fe I 5250.6 Å exhibit classical, smooth "C"-shaped bisectors, which are well-modeled by GRASS.

B.2. Lines around 5381 Å

Four lines in this region were studied: Fe I 5379 Å, Ti II 5381 Å, Fe I 5382 Å, and Fe I 5383 Å. A fifth line, C I 5380 Å, was noted but excluded from the analysis owing to its very modest depth. Two lines, Fe I 5379 Å and Ti II 5381 Å, exhibit classical "C"-shaped bisectors. Fe I 5383 Å, on the other hand, has a "\"-shaped bisector of the kind typically observed for disk-resolved bisectors near the limb. In this case, as noted by Löhner-Böttcher et al.
(2019), the strong blueward trend is caused by blends in the blue wing of the line.Lastly, I 5382 is quite shallow, but its bisector is suggestive of the upper region of a "C"-shaped bisector.GRASS models Fe I 5379 Å and Ti II 5381 Å quite well, except for diminished curvature in the top ∼20% of the Fe I 5379 Åline.GRASS also reproduces the "\"-shaped bisector of Fe I 5383 Å quite well up to ∼60% of the continuum.However, the shape of the bisector of this region is shaped by the cores of weak lines in the wing of Fe I I 5383 Å; small changes in the relative depths of these lines (e.g., from activity, as the Sun was more active during the observation of the IAG Atlas; see Hathaway 2015 andReiners et al. 2016b) could lead to this discrepancy.Lastly, GRASS underestimates the amplitude of the shallow Fe I 5382 Å bisector.Being the shallowest line that we model with GRASS (with fractional depth ∼0.2), the amplitude underestimate is not entirely surprising.We note that the mismatch between the IAG and GRASS bisectors in general becomes most extreme above ∼80% continuum flux; the entirety of this line is above this level. B.3. Lines around 5434 Å Six lines in this region were studied: Mn I 5432 Å, Fe I 5432 Å, Fe I 5434 Å, Ni I 5435 Å, Fe I 5436.3Å, and Fe I 5436.6 Å. Fe I 5434 Å was the line modeled in the initial presentation of GRASS in Palumbo et al. (2022).Lines with classical "C"-shaped bisectors included Fe I 5432 and Fe I 5434.Lines whose bisectors visually correspond to the upper half of the classical "C" include Mn I 5432 Å, Fe I 5436.3Å, and Fe I 5436.6 Å.As discussed in §4.2.3, the Mn I 5432 Å line is particularly broad as the result of the prodigious hyperfine structure of Mn.Ni I 5435 Å is blended in the wing such that it's bisector is "\"-shaped, as seen for Fe I 5383 Å. GRASS generally models these lines well (with only very small differences in the curvature of bisectors).Somewhat notably, GRASS underestimates the curvature in the upper region of the "C" for Fe I 5434 Å, but this discrepancy is due to blends in the red line wing of the line which were modeled out in the pre-processing of the LARS data (as discussed in greater detail in §4.1 of Palumbo et al. 2022). B.4. Lines around 5578 Å Two lines in this region were studied: Fe I 5576 Å and Ni I 5578 Å.Both lines exhibit classical "C"-shaped bisectors which are well-modeled by GRASS.However, GRASS does somewhat underestimate the curvature of Fe I 5576 Å above ∼70% flux. B.5. Lines around 6150 Å Two lines in this region were studied: Fe II 6149 Å and Fe I 6151 Å.Both lines exhibit bisectors which visually correspond to the upper ∼half of the classical "C" shape.Considering the modest depths of these lines (especially so in the case of Fe II 6149 Å), GRASS models the bisectors of these lines remarkably well. B.6. Lines around 6171 Å Four lines in this region were studied: Ca I 6169.0Å, Ca I 6169.5 Å, Fe I 6170 Å, and Fe I 6173 Å.Two lines, Ca I 6169.0Å and Fe I 6173 Å, exhibit classical "C"-shaped bisectors that are well modeled by GRASS.Fe I 6173 Å is notably one of the most magnetically sensitive lines in the visible portion of the spectrum, and such has been extensively used to study the solar line-of-sight magnetic field (e.g., the Helioseismic Magnetic Imager of the Solar Dynamics Observatory, Scherrer et al. 
2012). GRASS does somewhat underestimate the curvature of this line. The remaining two lines, Ca I 6169.5 Å and Fe I 6170 Å, are blended such that their bisectors deviate from the classical "C" shape. Ca I 6169.5 Å exhibits a "\"-shaped bisector as in Ni I 5435 Å and Fe I 5383 Å. Fe I 6170 Å exhibits a bisector unlike those of the other blended lines studied in this work; it is shaped like a "/" up to ∼80% of the continuum before hooking back to the blue. GRASS faithfully reproduces the majority of these two bisectors, but deviates slightly in the uppermost regions.

B.7. Lines around 6302 Å

Two lines were studied in this last region: Fe I 6301 Å and Fe I 6302 Å. The Cegla et al. (2013, 2018, 2019) series of papers synthesized the Fe I 6302 Å line, which has also been widely used in solar physics. Both lines are otherwise widely used in the solar physics literature (see, e.g., the Introduction of Smitha et al. 2020 for a review). Both lines have classical "C"-shaped bisectors and are notably flanked by deep telluric oxygen lines. The bisectors of these lines are well-modeled by GRASS up to ∼70% of the continuum; above this level, GRASS notably underestimates the redward curvature of the bisector relative to the IAG bisector. It is plausible that these deviations are caused by variations in the strength of the oxygen telluric lines between the respective observing sites of the IAG Atlas (Göttingen, Germany; altitude ∼150 meters) and the LARS data used as input for GRASS (Teide Observatory, Canary Islands; altitude ∼2400 meters).

Figure 1. Collage of continuum-normalized, disk-center solar spectra originally observed for Löhner-Böttcher et al. (2018), Stief et al. (2019), and Löhner-Böttcher et al. (2019), with annotations for the lines used as input in this work. Lines denoted with a double dagger (C I 5380 Å and Na I 5896 Å) were included in the convective-blueshift analysis presented in Löhner-Böttcher et al. (2019), but are excluded from the analyses presented in this work, as explained in §2. Other lines are of telluric origin, significantly blended, and/or very shallow. Note the breaks in the wavelength axis; eight spectral regions spanning only a few Å each were observed. Spectroscopic properties and parameters for these lines are tabulated in Table 1.

Figure 2. Ensemble of synthetic disk-integrated bisectors for the 22 lines analyzed in this work; compare to Figure 17.14 of Gray (2008). These bisectors were measured from significantly oversampled spectra synthesized at R ∼ 700,000 to ensure smoothness. Classical "C"-shaped bisectors are shown in the left-hand figure, and bisectors of deeply blended lines are shown in the right-hand figure. The velocity offsets in both panels of each figure are arbitrary and for illustrative purposes only. As in Figure 17.14 of Gray (2008), the bisectors are composited at right in each figure to show their (dis)similarity.

Figure 3.
Line-by-line differences in granulation-induced variability.Left: Colored circles indicate the mean RMS RV measured from time-series of individual disk-integrated line profiles, following from the procedure outlined in §4.1.The colored diamonds show the mean residual RMS RV after using the tuned BIS to correct granulation velocities, as detailed in §4.2 and §4.2.2.In both cases, the error bars are the 1σ width of the RMS RV distributions measured from many trials, as described in §4.1.Right:The amount of variability (expressed as the RMS RV) exhibits no clear correlation with T 1/2 (see AlMoulla et al. 2022), or any other single quantity, including: wavelength, line depth, and potential of the lower and upper states of each transition (not shown).Although there is an appreciable difference between the most and least variable lines, it is not immediately apparent how the variability of a line could be predicted from the formation or atomic properties of a line. , Al Moulla et al. (2022) and Al Moulla et al. (2024) used solar and stellar data, respectively, to show that the measured RMS RV has a dependence on the formation temperature of the lines used.Following the methods of Al Moulla et al. (2022), we also computed their T 1/2 parameter (the temperature at the height in the atmosphere corresponding to 50% of a line's cumulative contribution function; see Figure 1 of Al Moulla et al. 2022) at the central wavelength of each line.The RMS RVs for each line are plotted against their T 1/2 in the right-hand panel of Figure 3.As with the other variables, we note no simple trend connecting a line's variability to its T 1/2 .Excluding the two lines with the highest variability (around T 1/2 ∼ 4700 K), one might plausibly deduce hints of the same "high-low-high" trend of RMS with temperature noted by Al Moulla et al. (2022) and Al Moulla et al. ( Figure 4 . Figure 4. Example correlations between bisector shape summary statistic and anomalous, granulation-induced velocity shift for a synthetic Fe I 5434 Å time series.Left: Time-averaged line bisector (opaque blue curve) and bisector variations (transparent blue curves).The amplitude of the bisector variations are exaggerated 50× for visual effect.The regions bound by the horizontal dashed lines and dotted lines represent the vt and v b regions used in the calculation of BIS, respectively.Middle: Correlation between BIS and apparent RV excursion with best-fit relation.Each point corresponds to one snapshot of the synthetic time series.Right: Same as middle panel, but for bisector amplitude a b . Figure 5 . 
Results of the shape-based granulation mitigation simulation described in §4.3. Each set of plots corresponds to simulations of spectra observed at spectral resolutions approximately representative of those of existing instruments, except for the bottom-right panel, which is meant to represent a fictionalized ultra-high-resolution instrument as the best-case, fiducial scenario. The RMS RVs reported in the left-hand panels of each figure correspond to measurements performed without any granulation mitigation. The RMS RVs in the right-hand panels were calculated from RVs corrected using the procedure described in §4.3.1. The reported RMS RVs are the mean of many trials; the typical errors on the mean are at the ≲1.5% level (not shown). Note that the step sizes for SNR and number of lines are not of fixed size. Unsurprisingly, both the raw and mitigated RMS RVs improve as the SNR and number of lines are increased. Notably, though, the achievable mitigation is strongly dependent on spectral resolution: only the three highest-resolution simulations achieve statistically significant reductions in the RMS RV (see Figure 6 and associated text).

Figure 6. Percent improvement in RMS RV for a subset of the simulations described in §4.3.1 and shown in Figure 5. Combinations of SNR and number of binned lines denoted with a "-" showed no significant improvement in RMS RV; none of the three lowest-resolution simulations (R ∼ 98,000, R ∼ 120,000, and R ∼ 137,000) achieved significant improvements in RMS RV for any combination of SNR and number of binned lines and so are not shown. It is plausible that ESPRESSO in UHR mode and PEPSI could demonstrate a retrieval (and correction) of convective variability for a bright star if enough similarly varying lines could be identified and binned, as discussed in §4.3.2.

Figure 8. Tiling (left-hand plot) and sub-tiling (right-hand plot) of an example stellar surface viewed at slight inclination. In both plots, the tiles and sub-tiles are drawn artificially large for illustrative clarity. The coloring indicates the fractional weight of each tile normalized to the largest weight value; individual weights are given by the product of the limb darkening and projected area of the tile. The stellar equator and the zero- and ninety-degree meridians are overplotted as white dashed lines. Left: The stellar surface is divided into tiles according to the prescription developed by Vogt et al. (1987). The number of latitudinal slices (and consequently the total number of tiles) is set such that the average tile area corresponds to the average angular size of the input observations, as described in §A.2. Right: Zoom-in on the tile highlighted by the blue border in the left-hand plot, with example sub-tiling. Because the grid resolution dictated by the angular size of the input observations would normally introduce discretization errors in the rotation velocity, limb darkening, and projected tile area computations, these quantities are calculated at the centers of sub-tiles within each larger tile (indicated by the black dots). As in Cegla et al. (2019), the sub-tiles are spaced evenly in stellar latitude and longitude within each larger tile.
Figure 9. Panel (a): Synthesis accuracy for the GRASS GPU implementation assessed against the CPU implementation fiducial. Spectra synthesized at double precision on a CPU and at both single and double precision on a GPU are shown in the top panel. Residuals for each of the GPU precisions are shown in the middle and bottom panels. Marginal distributions of the residuals are shown at the right of the middle and bottom panels. Note the differences in y-axis scales between each panel. Compared to the double-precision synthesis, which is accurate to several parts in 10 15, single-precision synthesis is only accurate to a few parts in 10 3 as the result of catastrophic cancellation in repeated interpolation calculations. This effect is most pronounced closer to the line core, where many repeated interpolations and in-place multiplications are performed, compared to the line wings and continuum. Because flux errors at the level of 1 × 10 −3 propagate to velocity errors of order ∼several cm s−1, we strongly advise against the use of the single-precision GPU implementation of GRASS. Panel (b): GRASS performance by implementation architecture. GPU benchmarks were performed with an NVIDIA Tesla V100S GPU. For these benchmarks, the size of the synthesized spectrum (i.e., number of lines) was varied, with the number of time steps fixed at Nt = 50, and the number of latitude slices fixed at N ϕ = 197. Even for smaller problem sizes, the GPU implementation outperforms the CPU implementation because of the large overhead incurred by computing the necessary geometrical parameters and weights (see §A.2) on the CPU. The double-precision GPU implementation achieves a factor of ≳100 speed-up for larger problem sizes. Owing to the large loss in accuracy in the Float32 implementation (see Figure 9), we do not recommend use of single-precision synthesis with GRASS.

Figure 11. Same as previous figure.

Figure 12. Same as previous figure.

Figure 14. Same as previous figure.

Figure 16. Same as previous figure.

Table 1. Parameters for spectroscopic lines in this work. Quantities denoted with a dagger (†) are from Löhner-Böttcher et al.

Table 2. RMS RV and 1σ variability in RMS RV for each line considered in this work; the data in this table correspond to the values shown in Figure 3. Velocity quantities are rounded to cm s−1 precision.

Table 3. Summary of granulation mitigation potential for individual lines using various diagnostics of bisector shapes. For each line and bisector shape diagnostic, R is the median Pearson correlation coefficient between the measured line RV and the considered bisector diagnostic. The corrected RMS RV and 1σ variability in the RMS RV are reported, and the percent improvement (rounded to the nearest whole number) over the "intrinsic" RMS RVs reported in Table 2 is shown for each line and bisector shape diagnostic.
21,406
2024-05-13T00:00:00.000
[ "Physics" ]
Structural Complexity and Informational Transfer in Spatial Log-Gaussian Cox Processes The doubly stochastic mechanism generating the realizations of spatial log-Gaussian Cox processes is empirically assessed in terms of generalized entropy, divergence and complexity measures. The aim is to characterize the contribution to stochasticity from the two phases involved, in relation to the transfer of information from the intensity field to the resulting point pattern, as well as regarding their marginal random structure. A number of scenarios are explored regarding the Matérn model for the covariance of the underlying log-intensity random field. Sensitivity with respect to varying values of the model parameters, as well as of the deformation parameters involved in the generalized informational measures, is analyzed on the basis of regular lattice partitionings. Both a marginal global assessment based on entropy and complexity measures, and a joint local assessment based on divergence and relative complexity measures, are addressed. A Poisson process and a log-Gaussian Cox process with white noise intensity, the first providing an upper bound for entropy, are considered as reference cases. Differences regarding the transfer of structural information from the intensity field to the subsequently generated point patterns, reflected by entropy, divergence and complexity estimates, are discussed according to the specifications considered. In particular, the magnitude of the decrease in marginal entropy estimates between the intensity random fields and the corresponding point patterns quantitatively discriminates the global effect of the additional source of variability involved in the second phase of the double stochasticity. Introduction Log-Gaussian Cox processes define a class of doubly stochastic Poisson processes [1] where the Gaussian intensity-generating function is transformed through exponentiation. These processes (see [2] for a formal definition and properties of log-Gaussian Cox processes) allow the generation of point patterns through a stochastic two-step procedure, where the clustering structure observed in the pattern is due to the inclusion of random heterogeneities in the intensity function. The first applications of these processes are attributed to Coles and Jones [3], who used a log-normal random field as a model of galaxies distribution, and Rathbun [4], who modeled the effect of external variables to describe the patterns formed by the location of organisms. Cox process models fit naturally into the geosciences and ecology fields [5] as the resulting point processes are considered to be driven by environmental variables. There are also contributions in the context of epidemiology [6], ecology [7,8], crime data analysis [9] and seismology [10], among others. The structural properties of random fields and point patterns can be characterized by means of informational and complexity measures. The concept of entropy, first defined in the context of Information Theory by Shannon [11], and generalized by Rényi [12], as the uncertainty contained in a probability distribution, can be used to quantify the degree of inhomogeneity of each phase of the process. Other informational measures, such as Kullback-Leibler divergence [13] and the corresponding generalization proposed by Rényi [12], are useful to determine the probabilistic local coherence of the phases in the sense of the structural information transferred from the intensity field to the point pattern. 
A similar analysis can be performed in the context of complexity-for instance, with López-Ruiz, Mancini and Calbet (LMC) measure of complexity [14]. The exponential extension of LMC complexity proposed in [15], although originally introduced to solve the problems that arise for continuous distributions, is also appropriate in the discrete case as it can be interpreted in terms of diversity [16,17]. A related two-parameter generalization, in terms of Rényi entropies of different deformation orders, was formulated by López-Ruiz et al. [18]. Under a similar product-type structure, the two-parameter generalized relative complexity measure introduced by Romera et al. [19], based on Rényi divergences, is used here to describe the local coherence in terms of complexity between the two phases of the doubly stochastic process mechanism. In the last few decades, since Papangelou's [20] work defining the entropy rate for continuous point processes in the real line, many other studies have introduced theoretical concepts in the context of Information Theory and complexity for the analysis of point processes. Baratpour et al. [21] assessed the properties of non-homogeneous Poisson processes in terms of entropy; Daley and Vere-Jones [22] extended the definition of entropy for a point process in a d-dimensional space, and more recently, Angulo et al. [16] introduced approaches to the analysis of spatial point patterns regarding informational and complexity aspects and focusing on a multifractal context. However, to our knowledge, complexity and information transfer between the two phases of generation of log-Gaussian Cox processes has not been explored. Many natural phenomena can be modeled by using the family of log-Gaussian Cox processes as the two phases of stochasticity allow us to fit point processes driven, in many cases, by environmental variables. When there is no random field involved, and thus only one phase is considered, we obtain, as a particular case, the family of inhomogeneous Poisson processes. There is a major fundamental difference between these two families and some classical second-order measures cannot clearly distinguish between them. The approach introduced here more deeply considers the system stochastic hierarchical structure, disentangling the two-phase mechanism and analyzing the internal transmission of information, thus highlighting the differences between both types of families. This has a potential effect on a number of applications. For instance, when the point pattern observed is driven in nature by some external environmental variables, this analytical perspective is useful for the assessment of the information that these covariates transfer into the pattern observed. This has immediate applications in crime science, forestry or environmental problems. We would emphasize at this point that our approach is parallel-complementary, in a certain sense-to a more classical analysis of spatial point patterns based on second-order tools. While, from the latter point of view, we try to detect spatial structure in the pattern, in the novel, former approach, we envisage structural information and its transmission through the different phases defining these processes. In summary, the main objective of this paper is to analyze the structural transfer of information from the intensity random field to the subsequently generated point pattern in a log-Gaussian Cox process. 
A marginal global assessment is performed in terms of entropy and complexity measures, and, complementarily, local correspondence is evaluated based on divergence and relative complexity measures. The study, addressed by simulation under a variety of selected scenarios, is primarily focused on sensitivity in relation to the configuration of model parameters, as well as concerning the specification of deformation parameters involved in generalized informational measures. Section 2 introduces preliminary concepts, both in reference to the class of Cox processes, the object of the present study, and to the information and complexity measures used for structural assessment. The methodological approach and related computational aspects are described in Section 3. The results of the marginal approach and related joint analyses, based on the estimation of information and complexity measures from simulation of the doubly stochastic mechanism, are presented in Section 4, highlighting the most significant aspects. A synthetic discussion in reference to the objectives proposed is provided in Section 5. Concluding remarks, with identification of some relevant open lines for continuing research, are given in Section 6. Preliminaries In this section, we present a summary of the theoretical concepts involved in our analysis. First, we refer to the class of spatial doubly stochastic Poisson processes. Second, we review the definitions and basic properties of some well-known information measures, as well as related complexity measures developed in the last few decades. Log-Gaussian Cox Processes Cox processes [2], called doubly stochastic Poisson processes, constitute an important class of spatial point pattern models useful for the representation of a rich variety of structural point dependency effects. In essence, a Cox process can be defined as a Poisson process with a random intensity function, which can be technically formalized in terms of a hierarchical two-step procedure: first, a non-negative random field Λ(x) is generated on a given continuous domain D ⊆ R 2 ; second, for the obtained realization λ(x), a Poisson process with intensity function λ(x) is built. For Λ(x) to be valid, it is required that each realization is integrable on bounded sets. This mechanism allows the incorporation of heterogeneities of an intrinsic random nature at the intensity level. Cox processes are widely used in practice due to their meaningful and practical construction. In many applications, it is appropriate to assume that the intensity generating random field can be modeled as a suitable function of a Gaussian random field. Since the probabilistic structure can be completely specified in terms of the first-and second-order moments of the latter, consequently, this assumption represents some advantages regarding inferential aspects and interpretations. Under this approach, research has been particularly focused on the class of log-Gaussian Cox processes, for which the intensity-generating random function is defined as where G(x) is a Gaussian random field, with the first-and second-order moments expressed, respectively, as In this paper, the widely used Matérn class [23][24][25][26] is considered as the covariance model for the intensity-generating Gaussian random field, due to its high flexibility and richness for the representation of a wide variety of stationary spatial dependence scenarios. 
This model is defined by the homogeneous and isotropic covariance function

C(r) = σ² [2^(1−ν) / Γ(ν)] (r/ρ)^ν K_ν(r/ρ), r ≥ 0,

where K ν is the modified Bessel function of the second kind, σ 2 ≥ 0 is the variance of the Gaussian random field, and ν > 0 and ρ > 0, respectively, represent smoothness and scale parameters.

Information and Complexity

Information Theory arose in the context of Communication Theory for solving the emerging problems in message transmission through noisy channels. Based on the seminal concept of 'information content' introduced by Hartley [27], as a measure of the amount of information provided by the knowledge of the state in a finite system, Shannon [11] formulated 'entropy' (or 'information entropy') as a measure of the uncertainty intrinsic to a given discrete probability distribution, p = (p 1, . . . , p N), in terms of the expectation

H(p) = − ∑_i p_i ln p_i

(i.e., the expected information content). Shannon entropy is maximum for a system consisting of N equiprobable states, with H max = ln N. The reciprocal value given by the difference H max − H(p), generally normalized dividing by H max, is interpreted as 'redundancy' ([11]). Among various well-known generalizations, formally derived by a certain relaxation of the intrinsic axiomatics, the Rényi [12] entropy of order α (with α being a 'deformation parameter' on the probability distribution p) is defined by the expression

H_α(p) = [1/(1 − α)] ln ( ∑_i p_i^α ), α > 0, α ≠ 1.

This entropy is a decreasing function of the order α, and Shannon entropy is obtained as the limiting case for α → 1 (hence being also denoted as H 1 (p)). Rényi entropy is also maximum, for any α, in the case of equiprobability, again with H α,max = ln N. Correspondingly, the difference H α,max − H α (p), or its normalization dividing by H α,max, is interpreted as 'redundancy of order α'. The exponential of Rényi entropy (in particular, of Shannon entropy) can be seen as a 'diversity index of order α', representing, in a certain sense, the intrinsic number of states of the system according to the reference distribution ([17]). The meaning and added value of Rényi entropy with respect to Shannon entropy is better understood in relation to the α-power distortion implied on the argument distribution. In particular, for α > 1, in a non-equilibrium distribution, increasing values of α tends to give higher weight to the larger probabilities, in a sensitive way that depends on the whole internal structure of the distribution. Conversely, for α < 1, decreasing α tends to equilibrate, in a certain way, as mentioned before, the reference distribution. Thus, the curve of Rényi entropies can be used for assessing and comparing systems that may even have equal Shannon entropy.

Another important concept for the structural assessment of a random system is complexity, which, among other approaches introduced in the literature, has been specifically understood, in a probabilistic informational sense, as a departure from both degeneracy into one single state ('perfect order') and equiprobability ('complete disorder').
In this direction, López-Ruiz et al. [14] proposed the following formulation of a complexity measure (usually referred to as the 'LMC complexity'):

C_LMC(p) = H(p) · D(p),

for a given discrete probability distribution p, with the first factor being the Shannon entropy and the second one representing the disequilibrium, defined as the quadratic distance to equiprobability,

D(p) = ∑_i (p_i − 1/N)².

We may remark at this point that, as occurs with most proposals of complexity measures, the widely used product-type formalism, and particularly the LMC complexity measure, has certain inherent limitations, and its interpretation as a quantifier of some specific aspects within the broad and multidimensional concept of 'complexity' must be restricted to the relative balance between the two factors involved (see, for instance, the critical discussion by [28]). For the case of continuous probability distributions, Catalán et al. [15] proposed a modified 'exponential' version of the LMC complexity (in the sense that Shannon entropy is replaced with its exponential), which was the basis for the formulation of a two-parameter generalized complexity measure, proposed later by López-Ruiz et al. [18]. In fact, the latter is perfectly meaningful also in the discrete case (see [16], for instance, in relation to the [17] diversity index), for which it takes the form

C_{α,β}(p) = exp( H_α(p) − H_β(p) ).    (1)

Therefore, this complexity measure quantifies, in the exponential scale (interpreted as diversity, as mentioned before), the sensitivity of the argument distribution to power distortion in terms of the increments of the Rényi entropy curve between two given values, α and β, of the deformation parameter. A comprehensive display is usually given in the form of an (α, β)-map, for selected deformation parameter ranges.

While entropy and complexity measures enable a comparison in global terms (marginally) of two given probability distributions, a proper assessment of their structural (state-by-state) dissimilarity, or lack of mutual coherence, is achieved by means of divergence and relative complexity measures. For two given probability distributions p = (p 1, . . . , p N) and q = (q 1, . . . , q N) on a system with N possible states, Kullback and Leibler [13] defined the divergence of p from q as

KL(p‖q) = ∑_i p_i ln(p_i / q_i).

This is a non-symmetric ('directed'), non-negative measure, vanishing if and only if p = q. A corresponding generalization based on a 'deformation parameter' is also given by the Rényi [12] divergence of order α of p from q, defined as

H_α(p‖q) = [1/(α − 1)] ln ( ∑_i p_i^α q_i^(1−α) ).

For fixed argument distributions, Rényi divergence is a non-decreasing function of the deformation parameter. For α → 1, H α (p‖q) tends to KL(p‖q) (hence with the latter being also denoted as H 1 (p‖q)). The exponential of Rényi divergence (in particular, of Kullback-Leibler divergence) can be interpreted as a 'relative diversity index of order α' (see [16]). In the special case of the divergence of order α from equiprobability, i.e., with q ≡ 1/N, its value can be calculated as

H_α(p‖1/N) = ln N − H_α(p),

hence being known as the 'information difference' of order α for p.
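A minimal numerical sketch of the measures summarized above is given below. The original study was carried out in R; this Python version simply follows the definitions as written here (including the exponential form adopted for the generalized complexity), and the handling of empty cells is an assumption of the example.

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum p_i ln p_i (natural log), ignoring empty cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha; reduces to Shannon entropy as alpha -> 1."""
    if np.isclose(alpha, 1.0):
        return shannon_entropy(p)
    return np.log(np.sum(p[p > 0] ** alpha)) / (1.0 - alpha)

def lmc_complexity(p):
    """LMC complexity: Shannon entropy times the quadratic disequilibrium."""
    n = p.size
    return shannon_entropy(p) * np.sum((p - 1.0 / n) ** 2)

def generalized_complexity(p, alpha, beta):
    """Two-parameter complexity exp(H_alpha - H_beta), on the diversity scale."""
    return np.exp(renyi_entropy(p, alpha) - renyi_entropy(p, beta))

def kl_divergence(p, q):
    """Kullback-Leibler divergence of p from q (assumes q > 0 wherever p > 0)."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha of p from q.
    Cells with q = 0 and p > 0 make the divergence infinite for alpha >= 1;
    they are simply excluded here for the sake of the illustration."""
    if np.isclose(alpha, 1.0):
        return kl_divergence(p, q)
    mask = (p > 0) & (q > 0)
    return np.log(np.sum(p[mask] ** alpha * q[mask] ** (1.0 - alpha))) / (alpha - 1.0)
```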
A divergence-based formulation of a product-type generalized relative complexity measure was introduced for continuous distributions by Romera et al. [19]. In the discrete case, it takes the form (see Angulo et al. [16], in relation to a concept of 'relative diversity')

C_{α,β}(p‖q) = exp( H_α(p‖q) − H_β(p‖q) ).    (2)

In particular, for q ≡ 1/N, the generalized complexity and relative complexity measures (1) and (2) are reciprocal in the following sense:

C_{α,β}(p‖1/N) = [C_{α,β}(p)]^(−1).

Both for Rényi divergence and generalized relative complexity, the implications of the deformation parameter are directly related (similarly as mentioned for the cases of Rényi entropy and generalized complexity) to the power distortion effect derived on the two distributions involved; that is, the curve of Rényi divergences, and the corresponding map of generalized relative complexities, provide information about the sensitivity of divergence (as the directed distance between two distributions on a given system), or relative diversity in the exponential scale, for different deformation parameter values. A synthetic review, providing connective relations and interpretation of the above-summarized information complexity concepts, is given by Angulo et al. [16].

Methodology

As mentioned in Section 1, the analysis is aimed at characterizing, both in a global (marginal) and a local sense, the information transfer from the intensity field to the point pattern. To this end, an empirical approach based on the lattice box-counting methodology is adopted. Formally, the analysis is based on the simulation of log-Gaussian Cox processes for different scenarios, under different varying configurations of the covariance function and mean parameters of the intensity-generating Gaussian random field; see details in Table 1. For each specific configuration, in the first stage, M independent replicates of the intensity field are simulated on the square D = [0, 10]², based on a 180 × 180 pixel window. From each realization, in the second stage, one or multiple point patterns are independently generated, as discussed below according to the objective of the analysis.

Table 1. Parameter configurations for the scenarios considered in the analysis (columns: variable parameters; fixed parameters).

For assessment, the information complexity measures are applied considering, in particular, a 10 × 10 lattice overlaid on the domain D. The choice of this lattice resolution has been experimentally established with the aim of preserving a certain balance based on the fixed cell size, being large enough to reflect a spatial distribution coherent with the point pattern structure and, at the same time, adequately small regarding the smoothing effect derived on the intensity field realization (see, for example, Figure 1). Each simulated realization of the intensity field (m = 1, . . . , M) is converted to a quadrat-based ('raster') mean intensity field by averaging its values within each cell (as displayed in Figure 2), and further transformed by normalization into a discrete probability distribution, denoted as q[m] = (q[m] i,j : i, j = 1, . . . , 10), hence representing the relative intensity for the quadrat design adopted; in the study performed here and in Section 4, we take M = 100. As for the point patterns, the relative frequencies obtained from intra-cell event counting provide the quadrat-based reference discrete probability distribution, denoted as p[m] = (p[m] i,j : i, j = 1, . . . , 10) for just one correspondingly generated pattern, or p[m; r] = (p[m; r] i,j : i, j = 1, . . . , 10), with r = 1, . . . , R, in the case of multiple patterns.
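For illustration, the following sketch reproduces the two-phase generation mechanism and the quadrat-based construction of q[m] and p[m] in Python. It is a simplified stand-in for the R-based simulation described above (spatstat's rLGCP on a 180 × 180 pixel window): a coarser pixel grid is used so that a dense Cholesky factorization of the Matérn covariance matrix remains tractable, and the parameter values shown are placeholders rather than those of the scenarios in Table 1.

```python
import numpy as np
from scipy.special import kv, gamma as gamma_fn

rng = np.random.default_rng(1)

def matern_cov(dist, sigma2=1.0, nu=1.0, rho=1.0):
    """Matern covariance with variance sigma2, smoothness nu and scale rho."""
    d = np.where(dist == 0, 1e-12, dist) / rho
    c = sigma2 * (2.0 ** (1.0 - nu) / gamma_fn(nu)) * d ** nu * kv(nu, d)
    return np.where(dist == 0, sigma2, c)

def simulate_lgcp(n_pix=50, side=10.0, mu=0.0, sigma2=1.0, nu=1.0, rho=1.0):
    """One realization of a log-Gaussian Cox process on [0, side]^2.
    The pixel grid is coarser than the paper's 180 x 180 window so that the
    dense covariance matrix can be factorized directly."""
    h = side / n_pix
    centers = (np.arange(n_pix) + 0.5) * h
    xx, yy = np.meshgrid(centers, centers)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = matern_cov(dist, sigma2, nu, rho) + 1e-8 * np.eye(pts.shape[0])
    g = mu + np.linalg.cholesky(cov) @ rng.standard_normal(pts.shape[0])
    lam = np.exp(g).reshape(n_pix, n_pix)      # intensity field realization
    counts = rng.poisson(lam * h * h)          # second phase: Poisson counts per pixel
    return lam, counts

def quadrat_distributions(lam, counts, n_quad=10):
    """Aggregate the intensity field and the event counts onto an n_quad x n_quad
    lattice and normalize them into the probability distributions q[m] and p[m]."""
    f = lam.shape[0] // n_quad
    q = lam.reshape(n_quad, f, n_quad, f).mean(axis=(1, 3))
    p = counts.reshape(n_quad, f, n_quad, f).sum(axis=(1, 3))
    return q / q.sum(), p / max(p.sum(), 1)

# mu chosen so that E[Lambda] = exp(mu + sigma2/2) is roughly 10, as in the text
lam, counts = simulate_lgcp(mu=np.log(10.0) - 0.5, sigma2=1.0, nu=1.0, rho=1.0)
q, p = quadrat_distributions(lam, counts)
```

The resulting arrays q and p can then be passed to the entropy, divergence and complexity functions sketched earlier to reproduce, on a single replicate, the marginal and joint assessments described in the following sections.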
Entropy (Shannon, Rényi) and complexity (LMC, generalized) measures are marginally evaluated on the related probability distributions derived, for the assessment regarding the characterization and global transfer of information between the two phases of stochasticity. In particular, according to the scenarios considered, the analysis of results is focused on sensitivity with respect to variations in the model and deformation parameters. Additionally, the degree of local (cell-based) coherence between the distributions corresponding to the two phases is analyzed on the basis of divergence (Kullback-Leibler, Rényi) and relative complexity (generalized) measures. The K-function plots displayed in Figure 3 show the effect of diverse settings of the covariance parameters in the spatial correlation of each pattern. Analysis based on information complexity measures is expected to reflect, according to model specifications, significant structural features in the internal hierarchical construction of the processes. As mentioned, from each simulation of the intensity field, either one or multiple independent point pattern replicates can be analyzed. The second strategy specifically seeks to discriminate the relative contribution from the two phases of stochasticity to the structural variability of the process. For a preliminary assessment regarding this issue, R = 100 independent patterns are generated from each one of M = 100 replicates of the intensity random field (hence having in total M × R = 10, 000 patterns), for varying values of the variance parameter according to Scenario 1 (see Table 1). Different standard deviation values of Shannon entropy and Kullback-Leibler divergence, summarized in Tables 2 and 3 . Clearly, in the special case where R = 1, the 'Mean Intra SD' is null and the three remaining quantities, 'Inter-Mean SD', 'Total SD' and 'Inter-Single SD', become equivalent.) In general terms, comparing the results for 'Total SD' and 'Inter-Single SD', we can see that the total variability of the entropy and divergence measures obtained from multiple patterns can be well captured with just a single pattern generated from each realization of the intensity field. Therefore, in the analyses performed in Section 4, we simulate patterns without replicates. Nevertheless, from the 'inter-intra' analysis based on multiple replicates, it is interesting to observe that, as the variance parameter σ 2 increases, the average internal variability of entropy and divergence values for the patterns derived from each intensity realization progressively decreases compared to a significant increase in the corresponding variability for the intra-averaged entropy values, and a more stable behavior for the divergence values. This is also visualized in the 95% confidence-level bands in the curves represented in the plots of Figures 4 and 5, respectively, for the mean entropy and mean divergence values obtained with and without multiple replicates. With the aim of providing a benchmark case, 100 independent realizations of a homogeneous Poisson process with constant intensity λ = 10, and similarly for a log-Gaussian Cox process with (pixel-based) white noise intensity, are simulated and analyzed. In the second case, the intensity generating the Gaussian random field has a local normal distribution with mean µ = 0 and variance σ 2 = 2 ln 10, generating an average intensity close to 10. 
These cases result in patterns with low structuring, establishing certain upper and lower bounds for entropy and complexity, respectively. The study is carried out using R software-in particular, the spastat and RandomFields packages for simulating the processes using the rLGCP function. The raster package is used for discretization of the intensity random field. Weak Structure Reference Processes Due to the trivial structure of the reference homogeneous and white noise intensitybased Poisson processes, significant inhibition or aggregation effects are not expected to be observed in the pattern realizations, i.e., points should be mostly evenly distributed throughout the spatial domain; as a consequence, Shannon and Rényi entropies (Figures 6 and 7) take high values, still showing a certain global gain in structuring, slightly increased in the white noise intensity case. Obviously, as the intensity field is constant for the homogeneous Poisson process, its quadrat-based distribution is perfectly uniform and entropy reaches the maximum possible value H α,max = ln(10 2 ) = 4.60517, for all α, whilst the entropy of the quadrat-based distribution for the white noise intensity is always close to this maximum. On the other hand, the LMC complexity of the pattern realizations, in both cases, is close to the minimum value 0; see Figure 8. This is expected since H ∼ H max and D ∼ 0. However, in terms of this complexity measure, the pattern structuring is still slightly higher in the white noise intensity case. The structuring effect from the additional source of variability present in the generation of the point patterns is also assessed by the local discrepancies measured between the point pattern and intensity quadrat-based distributions, as shown in Figure 9 for Kullback-Leibler divergence, Figure 10 for Rényi divergence, and Figure 11 in terms of the generalized relative complexity. The initial variability present in the white noise intensity field is reflected as well in the generation of the patterns. Marginal Analysis The behavior of entropy is highly influenced by the isolated variation in each parameter. The increase in the variance parameter results in clusters with a high concentration of points, which implies a decrease in entropy values as the distributions move further away from equiprobability (Figure 12a). Large values of the variance parameter increase the local variability in the magnitude of clustering. The increase in the mean parameter is reflected in a larger mean intensity of the process, which results in an exponential increase in the mean number of points in the pattern. As a result, while the entropy values of the quadrat-based probability distribution from the intensity field essentially remain at a constant level below H max (related to the specification of a fixed variance σ 2 = 1), the entropy values for the corresponding patterns progressively tend to increase from below this level (Figure 12b). The ν parameter controls the differentiability of the intensity field and determines its smoothness. In Figure 12c, the entropies of the two phases of the processes show a slight gradual decrease as the parameter increases, while the differences between them remain steady. This is partly influenced by the adopted box size, taking into account the local nature of the smoothness effect. The scale parameter ρ measures the decay of covariance in the intensity field as a function of the distance. 
The covariance decays more slowly as ρ increases, which results in more homogeneous fields, limiting the formation of clusters. As a result, larger values of entropy are obtained for the intensity fields as the scale parameter increases, whilst, for the point patterns, we can observe an increase in the variability of the measure (Figure 12d). The maps of Rényi entropy (Figures 13-16) show the evolution of the global heterogeneity in each scenario for different magnitudes of the deformation parameter, which produces a distortion effect on the probability distributions. The behavior of the processes in terms of the exponential LMC complexity (Figure 17) reflects, in general, an increase in structuring from the intensity field to the generated point pattern (the inversion observed in Figure 17a in relation to the variance parameter is related to the quadrat-counting procedure and the low mean value specified). The maps of the generalized complexity ( Figure 18) show the structural variation in each system for different combinations of the deformation parameters α and β. Joint Analysis In this section, we focus on the divergence-based measures that allow the local comparison of two probability distributions in each possible state of the system. In particular, we aim to compare the distributions of the intensity fields and the point patterns to asses the state-by-state information transfer, and hence structural contribution, between the two phases. As we have seen in the previous section, Shannon entropy for the distributions of intensity fields and corresponding point patterns in Scenarios 1 and 2 (Figure 12a,b) converges globally as the variance and mean parameter values increase, respectively. The Kullback-Leibler divergence shows that there exists as well increasing local coherence in the transfer of information (Figure 19a,b). On the other hand, in Scenario 3, divergence essentially remains steady with respect to changes in the smoothness parameter (Figure 19c), which suggests that, under the quadrat-based approach adopted, the structural transfer is not overly sensitive to variations in the smoothness parameter. In Scenario 4, the loss of the structure in the intensities induced by high magnitudes of the scale parameter increases the stochasticity in the generation of the patterns in the second phase ( Figure 19d). Table 1). The maps representing the Rényi divergences in Figure 20 allow us to visualize how the distortion induced by the deformation parameter is reflected in the structural local departure of the point patterns from the intensity fields. Similarly, those referring to the generalized relative complexity show the sensitivity of Rényi divergences with respect to incremental changes in the deformation parameter (Figures 21-24). Discussion As a result, from the study performed, it can be emphasized that the information transfer and, in general, the structure of the processes are mainly determined by the values of the mean, variance and scale parameters. Conversely, the value of the smoothness parameter does not have a perceptible effect on the structural information transferred between the phases. In particular, among other aspects, the analysis shows a significant increase in the system complexity and a loss of diversity for large magnitudes of the variance parameter (both marginally, for the intensity field and point pattern). 
Regarding the local coherence measured in terms of divergence, we can observe an enlargement of the structural information transferred as a result of the increase in the mean and variance parameter values. On the other hand, the increase in the scale parameter results in the loss of structure of the intensity field, raising the stochasticity inherent to the pattern generation, with an associated increase in the divergence between both phases. Conclusions An assessment focused on the relevance of using entropy, divergence and complexity measures for the evaluation of the global and local structural information transfer from the intensity fields to the point patterns of log-Gaussian Cox processes is presented. Maps of generalized ordinary and relative information complexity measures are derived, showing the sensitivity of the distributions involved in relation to both stochasticity phases, with respect to the deformation parameter under different scenarios. In general terms, and depending on the specific case, the transfer of structural information from the intensities to the subsequent point patterns is quantified by the information and complexity estimates, reflecting as well the contribution of the additional source of variability involved in the second step. Among other relevant lines for continuing research, the study performed motivates the subsequent formal investigation of several analytical aspects involved in the structural complexity of log-Gaussian Cox processes. Multifractal Cox processes [29] constitute an important extension for analysis based on the connection between information complexity measures and multifractal dimensions [16]. In the spatiotemporal context, implications with reference to predictive risk evaluation and mapping [30], regarding the information complexity characterization of different scenarios, are also under development by the authors. A further insight to be considered in future research is quantifying the information transfer from individual external covariates and providing an inferential framework to attach significance to this sort of statistical testing. In this paper, we have been restricted to a descriptive analysis, but the upgrade into the inferential context offers a natural motivation for the continuation of the study presented. Future developments also include using and comparing alternative models under the information complexity approach. The development of a model classification in terms of these measures would be worth exploring. Author Contributions: The three authors contributed equally to all aspects of this work. All authors have read and agreed to the published version of the manuscript.
6,997.2
2021-08-31T00:00:00.000
[ "Mathematics", "Physics" ]
Morphological phenotyping of mouse hearts using optical coherence tomography Abstract. Transgenic mouse models have been instrumental in the elucidation of the molecular mechanisms behind many genetically based cardiovascular diseases such as Marfan syndrome (MFS). However, the characterization of their cardiac morphology has been hampered by the small size of the mouse heart. In this report, we adapted optical coherence tomography (OCT) for imaging fixed adult mouse hearts, and applied tools from computational anatomy to perform morphometric analyses. The hearts were first optically cleared and imaged from multiple perspectives. The acquired volumes were then corrected for refractive distortions, and registered and stitched together to form a single, high-resolution OCT volume of the whole heart. From this volume, various structures such as the valves and myofibril bundles were visualized. The volumetric nature of our dataset also allowed parameters such as wall thickness, ventricular wall masses, and luminal volumes to be extracted. Finally, we applied the entire acquisition and processing pipeline in a preliminary study comparing the cardiac morphology of wild-type mice and a transgenic mouse model of MFS. Introduction Transgenic mouse models have proven to be invaluable in research to understand the etiology behind a wide variety of cardiovascular pathophysiologies. The changes in cardiac morphology and functioning in response to targeted gene manipulations can provide insight into the origin and progression of cardiovascular dysfunction. For example, mouse models have been beneficial in elucidating the etiology behind Marfan syndrome (MFS), a multisystemic genetic connective tissue disorder that affects approximately 1 in every 5000 to 10,000 individuals. 1,2 MFS can result from numerous different genetic mutations normally in the fibrillin-1 (FBN1) gene, and has a short untreated life expectancy mainly due to its cardiovascular complications, such as aortic dilation and dissection. Using histology, researchers have been able to study the changes in aortic wall composition, structure, and size in transgenic mouse models of MFS. These histological studies provided valuable insight that linked specific fibrillin-1 mutations and the severity of the disease. [3][4][5][6] Mouse models have also been beneficial in elucidating the mechanisms behind other cardiovascular diseases, such as myocardial infarction and congenital heart disease. 7,8 Although histology has been successfully used to study some pathological changes in the heart, other structural anomalies have been difficult to visualize as the sectioning plane limits the field-of-view. In these cases, a global, three-dimensional (3-D) image of the heart would be useful for locating structural anomalies and determining the optimal sectioning plane. 3-D visualization is also valuable as it allows researchers to study the heart macroscopically, which can yield important information about morphological abnormalities due to disease. Techniques such as episcopic fluorescence image capture allow for the acquisition of 3-D cross-sectional volumetric information at a cellular resolution, but, as with conventional histology, require destructive tissue sectioning. 9 Other techniques, such as echocardiography, cardiac magnetic resonance (CMR) imaging, and microcomputed tomography (μCT), provide 3-D datasets without microsectioning and are important in vivo medical imaging techniques. 
Echocardiography is the preferred imaging modality for in vivo functional imaging due to its high temporal resolution. Commercially available ultrahigh-frequency ultrasound systems (20 to 100 MHz) offer higher spatial resolution, up to tens of microns, which is sufficient to resolve many important physiological parameters, such as ventricular wall thickness and ejection fraction. However, due to the tradeoff in depth of penetration and probe frequency, most in vivo adult mouse applications are centered on using 40 to 50-MHz probes, which offer spatial resolutions on the order of 50 μm. 10 Moreover, since echocardiography cannot propagate through bone or air, images of the heart can only be acquired through specific unobstructed windows, which limit its scope. Compared to echocardiography, CMR and μCT are not hampered by imaging depth or resolution. Instead, their use in ex vivo applications is limited by a high imaging cost in the case of CMR, and a low soft-tissue contrast in the case of μCT. Although contrast-enhancement dyes can be used for μCT, detailed visualizations of smaller features, such as cardiac valves, chordae tendinae, or smaller vasculature are still hard to obtain. 9 Thus, there is a need for a cost-effective imaging modality that can image the whole mouse heart with high spatial resolution. One potential modality is optical coherence tomography (OCT). Similar to echocardiography, OCT utilizes the backreflected signal to generate depth-resolved, cross-sectional images and does not specifically require the use of tissue sectioning or exogenous contrast agents. In comparison to echocardiography, OCT provides 3-D volumetric datasets with higher spatial resolution and intrinsic tissue contrast, making it easier to visualize structures such as the coronary vasculature and the myocardial fiber bundles. Moreover, the 3-D nature of the acquired OCT datasets allows researchers to better locate and quantify both the local and global morphological changes in the heart, as, unlike histology, OCT does not require any tissue sectioning. In fact, since OCT is nondestructive, traditional histological approaches, such as immunohistochemistry, can be used as a complementary, follow-up approach to further investigate the tissue structure and composition at a higher resolution. In this report, we describe the imaging and 3-D image processing steps necessary to use OCT as a cardiac imaging modality, and then validate the approach by comparing the cardiac morphologies of wild-type (WT) mice against a transgenic mouse model of MFS. Although OCT offers micrometer-scale resolution, it has a limited penetration depth, ∼1 to 2 mm in cardiac tissue, 11,12 making it difficult to resolve internal structures such as the valves or inner surface of the walls of a mouse heart. We present a novel, multifaceted approach involving optical clearing and multiperspective imaging to increase the penetration depth of OCT without sacrificing resolution. After acquisition, the volumes were corrected for refractive distortion to facilitate the creation of a single, volumetric reconstruction of the whole mouse heart. Finally, we describe the image processing pipeline necessary to make quantitative comparisons of the cardiac morphology between multiple hearts, and present preliminary findings studying the differences in cardiac morphology between WT mice and a transgenic mouse model of MFS. 
Animal Preparation All experimental protocols were performed at the Child and Family Research Institute (CFRI) with approval from the Animal Ethics Board of the University of British Columbia and are in accordance with the Canadian Council on Animal Care Regulations. Three WT (Fbn1 þ∕þ ) and five MFS (Fbn1 C1039G∕þ ) mice were used in this study. All mice were 12 months of age. The founder Marfan mouse models were graciously provided by Dr. HC Dietz at Johns Hopkins University and were bred in the CFRI animal care facility. In preparation for extracting the heart, the mice were first anesthetized by 2% to 3% isoflurane with oxygen and then heparizined (100 μL, at 1000 U∕mL and 5000 U∕kg) via an intraperitoneal injection. After the disappearance of the toe pinch response, the hearts were excised via a midsternal thoracotomy. The aorta was cannulated and retrogradely perfused using a Langendorff apparatus with Tyrode's solution (4 min at 2 mL∕ min) and 4% paraformaldehyde (30 min at 2 mL∕ min) to remove the blood and fix the tissue. The fixed heart was stored at 4°C in phosphate-buffered saline. Prior to imaging, the hearts were cleared via immersion in glycerol using a graded protocol (50% and 70% for 1 day each). Data Acquisition A custom-built 1060-nm swept-source optical coherence tomography (SS-OCT) system was used to image the hearts (Fig. 1). For the light source, we used a commercial sweptsource engine (Axsun Technologies, Massachusetts) that had an effective 3-dB bandwidth of 85 nm, resulting in a calculated axial resolution of 7 μm in air. Light was focused on the sample using a telecentric scan lens (LSM04-BB, Thorlabs, New Jersey) that had an effective focal length of 54 mm, thereby yielding a transverse resolution (full-width-half-maximum) of 15 μm in air. Due to the limited depth of penetration and imaging depth of our OCT system, the whole adult mouse heart could not be imaged from a single perspective. Instead, multiple volumes were acquired at 30-deg increments, rotating the heart about its long axis across the entire 360-deg span. At each rotation, 10 volumes were acquired and averaged to further offset the loss in signal-to-noise from the telecentric scan lens and glycerol clearing. Data acquisition was performed using an open-sourced program that utilized graphics processing units (GPUs) to allow for real-time visualization of the dataset. 13 Each volume consisted of 1408 × 400 × 800 voxels, with each voxel having a physical dimension of 2.6 × 21.1 × 18.0 μm 3 . Removal of Image Distortions The volumetric images of the mouse heart acquired from multiple perspectives were combined into a single volume of the whole heart. For the remainder of this paper, the volumes taken from different perspectives will be referred to as "subvolumes," whereas the volume of the whole heart will be referred to as the "whole volume." Prior to registering and stitching the subvolumes into a whole volume, distortions within the OCT volume due to the imaging process were corrected. In OCT, image distortions arising from nonlinear scanning distortion, nontelecentric scanning distortion, and refraction cause registration mismatch and decrease the accuracy of the quantitative measurements. 14- 16 We minimized the scan-related distortions by employing a telecentric lens as our objective lens and by considering only the linear portion of the scan in our data acquisition. The refractive distortions arising from the epicardial surface of the heart were minimized in postprocessing. 
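Before the detailed description that follows, a schematic Python sketch may help fix ideas about what a single-interface refraction correction of this kind involves. It is not the implementation used in this work: the refractive indices, the 2.6 µm axial sampling, the example surface normal, and the convention of converting optical to geometric path length by dividing by the tissue index are all illustrative assumptions.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refracted unit direction for incident unit ray d at a surface with unit
    normal n (oriented against the incident ray), via vector Snell's law."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:                       # total internal reflection: keep original ray
        return d
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def dewarp_a_scan(z, z_surf, normal, n_air=1.0, n_tissue=1.47, dz=2.6e-3):
    """Map each sample index z of one A-scan to a corrected 3-D position (mm).
    Samples above the segmented surface index z_surf keep the incident path;
    samples below it follow the refracted ray, with the optical path length
    converted to a geometric length by dividing by the tissue index
    (illustrative values: 2.6 um axial sampling, n_tissue ~ 1.47)."""
    d_in = np.array([0.0, 0.0, 1.0])       # telecentric scanning: rays parallel to z
    d_out = refract(d_in, normal, n_air, n_tissue)
    pts = np.empty((z.size, 3))
    for k, zk in enumerate(z):
        if zk <= z_surf:
            pts[k] = zk * dz * d_in
        else:
            geom = (zk - z_surf) * dz / n_tissue   # optical -> geometric path length
            pts[k] = z_surf * dz * d_in + geom * d_out
    return pts

# usage: a surface tilted in x, sampled over 1408 depth samples
pts = dewarp_a_scan(np.arange(1408), z_surf=300,
                    normal=np.array([0.2, 0.0, -1.0]) / np.sqrt(1.04))
```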
To correct for refraction, the refractive indices of each layer as well as the boundaries between the layers were determined. Since the heart was immersed in 70% glycerol prior to imaging, and glycerol permeates the tissue through passive diffusion, glycerol was assumed to be within the heart chambers as well. Given that the majority of the refraction occurs at the air-to-tissue interface, the algorithm only corrected for distortion due to refraction at the epicardial surface of the heart. The outermost surface of the heart was automatically segmented in 3-D using a gradient-based approach. The volumes were first despeckled using 3-D edge-preserving bounded-variation (BV) smoothing. 17 After denoising, the volumes were then convolved with 3-D Sobel filters to obtain three gradient volumes, G x , G y , and G z . The gradient magnitude, G, was then calculated for each voxel using G = √(G x ² + G y ² + G z ²) and then masked such that only the gradient within the tissue was nonzero. For each slow-axis scan, the segmented surface was adjusted to ignore wrapping due to complex-conjugate artifact. The wrapped portion of the image was detected by taking into account the convex nature of the ventricular surface. The inflection point was detected in the segmented surface through the second derivative test. For the part of the image past the inflection point, the bottom surface was segmented and unwrapped, and the actual surface of the heart was estimated using interpolation. Using this method, the segmentation algorithm was able to ignore wrapping due to complex-conjugate artifact and detect the outermost surface of the heart [Fig. 2]. After determining the co-ordinates of the top surface, we then corrected for refractive distortions using a similar approach to previously published and validated results. 15,16 The refraction correction was based on the vector form of Snell's law, n 1 (V 1 × N) = n 2 (V 2 × N), (1) where n i is the refractive index of the i'th material, V 1 and V 2 are the incident and refracted rays, and N is the surface normal. 15,18 Since we used a telecentric lens for the objective lens, we assumed that V 1 (x, y, z) = ⟨0, 0, 1⟩, where z denotes depth, and x, y denote the lateral position of each A-scan. The surface normal, N, was computed by taking the cross product of the horizontal and vertical surface gradients. The length of the refracted ray (optical path length) was also scaled by the refractive index. Once the refracted ray was calculated, the image was then dewarped. For a given A-scan, the position of each voxel, P(x, y, z), is given by P(x, y, z) = V 2 (x, y, z) if V 1 (x, y, z) > S(x, y, z), and V 1 (x, y, z) otherwise, (2) where S is the location of the segmented top surface, and P is the adjusted co-ordinate of the voxel. After calculating P, we interpolated the scattered data to find the one-to-one mapping between the refraction-corrected and original volumes. Volumetric Registration and Stitching The refraction-corrected subvolumes were registered together to form a whole volume. 3-D rigid registration, with 6 degrees-of-freedom, was chosen to avoid introducing nonphysical distortions from nonrigid algorithms. Registration was performed using Amira (FEI, France), a commercial 3-D imaging analysis and processing software. To register multiple subvolumes together, we registered each volume to multiple template volumes. The volumes acquired from 90-deg perspectives (at 0 deg, 90 deg, 180 deg, and 270 deg) were first pairwise registered to form the skeleton of the cardiac geometry.
The remaining volumes were then registered to their two closest neighbors; for example, the volume acquired at 30 deg was registered to the volumes acquired at 0 deg and 90 deg. In this manner, the registration errors were divided across the whole volume. After registration, the subvolumes were stitched together to form a whole heart volume. Although the refraction correction procedure minimizes geometric distortion, small mismatches in the registered volumes due to residual distortions and registration errors remain. These small mismatches would result in a blurring of high frequency information if the volumes were stitched using simple averaging. To prevent the decrease in resolution associated with averaging, we implemented a 3-D version of multiband blending, whereby lower frequency information was averaged over a wider area, whereas high frequency information was averaged over a narrower region. 19,20 The contribution of each volume was determined by calculating a priority function. For a set of i ¼ 1: : : N subvolumes, a priority function P i was calculated, where data with higher fidelity were given higher weight. The priority function was calculated using where x; y are the lateral scan positions, and x c and y c are the centers of the lateral scans. The stitching was performed in an iterative fashion, with one volume added to the final, stitched result per iteration. For more details as to how the priorities were used to blend the volumes together, please refer to the algorithm published by Brown and Lowe. 20 Figure 3 shows the contribution of each subvolume to the final, whole volume for a single, representative short-axis slice. Figure 3(c) shows the result of stitching the first two volumes, with the contribution of each volume shown using overlaid color. Figure 3(d) shows the final stitched result, with the contribution from each volume shown in Fig. 3(e). Although each slice is a composite from multiple subvolumes, registration and stitching artifacts are not easily distinguishable within the final result. The stitched result had a voxel size of 20 × 20 × 20 μm 3 . Morphometric Analysis Prior to performing the morphometric analysis, the atrioventricular valves and ventricular chambers and lumen were first segmented in each of the whole heart volumes. Figure 4 shows the steps of the semiautomatic segmentation process on two representative slices, a short-axis slice and a slice that shows the four chambers of the heart. Using Amira, the volume was first automatically thresholded using a hysteresis-based region-growing algorithm. Although most of the cardiac tissue was automatically segmented, deeper regions that had greater signal attenuation were not always detected and required manual refinement [ Fig. 4(b)]. Next, the atrioventricular valves, ventricular walls, and ventricular lumens were manually labeled in Amira [ Fig. 4(c)], and the ventricular masses were calculated by considering the voxel dimensions, the number of voxels assigned to the ventricular wall, and the specific gravity of myocardium (1.055 g∕cm 3 ). The luminal volumes were quantified in a similar manner. After segmentation, the inner and outer boundaries of the ventricular walls were delineated. The trabeculae, which are muscular protrusions located on the endocardial surface of the heart, were removed using morphological closing to compute the thickness of the myocardium. 
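The quantitative bookkeeping described above can be condensed into a few lines; the sketch below is a simplified stand-in for the Amira-based workflow, taking the 20 µm isotropic voxel size and the 1.055 g/cm³ specific gravity of myocardium from the text, while the toy segmentation and the size of the closing structuring element are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

VOXEL_MM = 0.020                       # 20 um isotropic voxels in the stitched volume
VOXEL_VOL_UL = VOXEL_MM**3             # 1 mm^3 = 1 uL, so voxel volume in uL
RHO_MYO = 1.055                        # specific gravity of myocardium (g/cm^3 = mg/uL)

def ventricular_mass_mg(wall_mask):
    """Mass from the number of wall voxels, the voxel volume and the tissue density."""
    return wall_mask.sum() * VOXEL_VOL_UL * RHO_MYO

def luminal_volume_ul(lumen_mask):
    return lumen_mask.sum() * VOXEL_VOL_UL

def remove_trabeculae(wall_mask, radius_vox=5):
    """Morphological closing with a diamond-shaped structuring element to fill
    endocardial trabecular protrusions before measuring wall thickness
    (the radius is chosen here only for illustration)."""
    struct = ndimage.generate_binary_structure(3, 1)
    struct = ndimage.iterate_structure(struct, radius_vox)
    return ndimage.binary_closing(wall_mask, structure=struct)

# usage on a toy segmentation
wall = np.zeros((64, 64, 64), bool)
wall[20:44, 20:44, 20:44] = True
wall[32, 32, 20:25] = False            # a small endocardial indentation
closed = remove_trabeculae(wall)
print(ventricular_mass_mg(wall), "mg before closing;",
      ventricular_mass_mg(closed), "mg after closing")
```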
After removing the trabeculae, the center of the left ventricle (LV) was estimated using a center-of-mass algorithm, and the image was unwrapped from that point. The boundaries of the right and left ventricles were then detected in the unwrapped image and converted back to Cartesian co-ordinates. The detected surfaces were spline-fitted to smooth the surface and resample it evenly. Spline fitting was performed by fitting the arc length of the detected surface to a spline function. Figure 4(d) shows the result of delineating the inner and outer surfaces of the right and left ventricular walls. The wall thickness at each point was then determined by defining the thickness to be the distance from the outer to inner wall in a direction perpendicular to the outer wall. Realignment The hearts were aligned to a standard orientation in order to facilitate comparison of similar regions across the sample set. Following conventions in the field of cardiac imaging, we aligned the vertical axis to the long axis of the left ventricle. The long-axis was estimated by modeling the left ventricle as an ellipsoid. [21][22][23] The circumferential orientation was standardized by finding the mean direction of the right ventricle relative to the left ventricle. Figure 5 presents a representative four-chamber slice from an OCT dataset of the whole heart. Structures within the heart such as the interventricular septum and the atrioventricular valves can be visually identified. Details within the myocardial wall, such as coronary vasculature (blue inset) and myofibril bundles (green inset), are also visible. Video 1 provides a fly-through movie of a representative whole heart volume. To quantify the registration error, we analyzed the root mean square error (RMSE) between the physical rotation applied during image acquisition, and the rotation that was found using the registration process. The mean RMSE for the four hearts was 0.72 deg AE0.12 deg. Since the rotation mount we used to rotate the cannula had a tolerance of 1 deg, our registered rotation agreed with the actual rotation. Figure 6 demonstrates the efficacy of glycerol clearing on increasing depth of penetration. At 0% glycerol, the endocardial surface of the left ventricle was not visible in the OCT B-scan. Effect of Glycerol Concentration of Depth of Penetration With the addition of glycerol, the OCT light source was able to penetrate the left ventricle completely, and details within the endocardial surface became apparent in the acquired image. Increasing the glycerol concentration increased the visible depth in the OCT imaging, but also decreased the contrast of the smaller features. We chose to use 70% glycerol as a compromise between the depth of penetration and the contrast. Effect of Refraction Correction The effect of refraction correction on the similarity of the contributions from the individual volumes is shown in Fig. 7, where three of the registered volumes have been placed in different RGB color channels. Refraction correction minimizes the artifactual differences in tissue thickness, and increases the coincidence of tissue structures across the individual volumes [Figs. 7(c) and 7(d)]. Morphological Comparison of Multiple Hearts To demonstrate the potential of using OCT in studying differences in cardiovascular morphology, we imaged and compared the hearts of three WT and five Marfan mice. Table 1 presents the physical characteristic of the mice, reported as mean AE SEM. 
The body weights, ventricular volumes, masses, and ventricular mass-to-volume ratio were compared using a paired t-test, with significance set at p < 0.05. The luminal volumes and masses were normalized by body weight. Comparing the WT and Marfan mouse model, the MFS mice exhibit significantly smaller mass-to-volume ratios (1.00 AE 0.01 versus 1.24 AE 0.06 mg∕μL) and larger normalized LV volumes (3.01 AE 0.21 μL∕g versus 2.24 AE 0.18 μL∕g), in MFS versus WT, respectively. The thickness of the LV was computed across the entire wall and normalized by body weight to minimize the potentially conflicting effect of heart size. 24 Figure 8 compares the normalized wall thickness between the Marfan and WT mice. The thickness at a particular point has been displayed as a heat map, where blue represents the thinnest portions and red represents the thickest portions. The right ventricle has been show in gray for the inferior and anterior views for orientation purposes. For display purposes, the LV wall was divided into four myocardial segments: anterior, lateral, septal, and inferior, as shown in Fig. 8. Comparing the WT and Marfan mice, the MFS mice exhibit slightly thicker walls relative to their WT counterparts. Discussion In this report, we demonstrated the feasibility of using OCT for phenotyping adult mouse hearts. Although OCT has been proven to be useful in characterizing the structure and function of embryonic hearts, the same demonstration has not been made yet in imaging the whole adult heart. 9,25,26 Due to the limited depth of penetration (∼1 to 2 mm in cardiac tissue) and trade-off between the depth of focus and spatial resolution, conventional OCT is unable to image the entire adult mouse heart with high resolution. We bypassed the depth limitation by implementing multiperspective imaging and optical clearing. These subvolumes were then refraction-corrected to remove distortions that were introduced by the image acquisition process. We demonstrated the impact of refraction correction on minimizing registration mismatches between the subvolumes. We also performed the first study comparing the cardiac morphology of WT and Marfan mice using OCT. We only quantified the global morphological changes since the focus of this project was in the development of OCT as a tool for cardiovascular phenotyping, and because other existing modalities, such as echocardiography and histology, are already able to evaluate changes in other parameters that are also relevant to MFS, such as the aortic root dimensions. Our results indicated that the Marfan mice exhibit significantly decreased left ventricular mass-to-volume ratio than their WT counterparts. The smaller mass-to-volume ratio is primarily due to the change in volume: when adjusted for body weight, the Marfan mice had significantly increased left ventricular volume, with little change in LV mass. Qualitative comparison of the normalized LV thickness also suggests that the Marfan mice may have slightly thicker walls than age-matched WT controls. The presence of dilated volumes with a near-normal wall thickness is suggestive of a dilated cardiomyopathy (DCM) phenotype, which is in agreement with previously reported findings on both humans and mouse models of MFS. [27][28][29] However, it is not clear whether DCM is a primary finding, or secondary to abnormal hemodynamic loading conditions from other cardiovascular complications. 
[27][28][29][30][31][32][33][34] One potential conflicting factor in our study was the impact of glycerol on tissue shrinkage. Glycerol has been shown to shrink tissue due to dehydration in a manner similar to airimmersion. 35 Moreover, as primary cardiomyopathy is still a debatable finding in MFS, we hypothesize that any glycerolinduced differences may be smaller than the pathophysiological differences. To the best of our knowledge, this is the first demonstration of OCT for the cardiovascular phenotyping of adult mouse hearts. Unlike echocardiography, OCT is able to provide volumetric images of the heart at a much higher spatial resolution and is better able to visualize small structures such as fiber bundles and coronary vasculature. The high-resolution, 3-D nature of the acquired datasets allows researchers to better locate and study small changes in the heart morphology, since they provide both a global frame of reference, and a local, high-resolution visualization of the structures of interest, from which any number of planes can be virtually sectioned. Moreover, due to the nondestructive nature of OCT, specific areas identified in the dataset can be further isolated and studied using other complementary techniques, such as immunohistochemistry. The analyses presented in this paper serve as a proofof-concept for the use of OCT in quantifying changes in cardiac morphology. Using OCT, we were able to find significant differences in cardiac morphology between WT and Marfan mice. The morphological analyses can be further improved through the application of more sophisticated computational anatomy tools, including nonrigid registration, which would allow researchers to study subtle, localized changes in morphology, such as valve thickness and local wall thickness, in a nondestructive manner. We developed the methodology using fixed hearts as a sample, but this approach can also be extended to study cardiac contractile function by imaging ex vivo live hearts by performing gated OCT with faster light sources. 36,37 Michelle Cua received her BSc degree in kinesiology, and her BASc (Honors) and MASc degrees in biomedical engineering from Simon Fraser University (SFU). Her research is currently focused on the development of biomedical imaging technologies, including OCT, adaptive optics (AO), and microscopy, to better visualize and study the changes in retinal and cardiac morphology in response to pathological and environmental conditions. Eric Lin received his PhD degree in physiology from University of British Columbia. He is investigating how hearts change their electrical properties to change their mechanical function, which allows them to function over a range of physiological conditions. Electrical and mechanical events in the heart are coupled by changes in intracellular calcium levels, and simultaneous voltage and calcium measurements are of interest, studied by combining traditional ECG and microelectrode techniques with modern optical mapping imaging techniques. Ling Lee received her MSc degree in biomedical physiology and kinesiology (BPK) at SFU in 2014, and is currently a research assistant in Dr. Glen F. Tibbits' lab at the Child and Family Research Institute. Her graduate thesis involved evaluating the functional and structural cardiac properties of transgenic mouse models of Marfan syndrome in vivo using echocardiography. Xiaoye Sheng received her MSc degree in kinesiology at SFU in 2005, and is working as a research technician for Dr. Glen F. 
Tibbits at the Child and Family Research Institute in Vancouver, BC. She is involved with echocardiography of Marfan mice, the study of ischemia reperfusion injury on neonatal hearts, and single cell calcium imaging. Kevin S. K. Wong received a BASc (Honors) degree in biomedical engineering at SFU in 2013, and is pursuing a master's degree with the Biomedical Optics Research Group at SFU. His graduate research concentrates on GPU computing for real-time OCT imaging and various applications of OCT, including speckle variance OCT, compressive sampling OCT, and wavefront sensorless adaptive optics OCT. Glen F. Tibbits is a professor and chair in the Department of BPK at SFU, and a Tier 1 Canada Research Chair in Molecular Cardiac Physiology. His research interests focus on investigating the molecular and cellular mechanisms behind cardiac contractility and cardiac adaptation to various physiological, pathological, and environmental conditions. Mirza Faisal Beg is a professor in the School of Engineering Science at SFU, and a Michael Smith Foundation for Health Research (MSFHR) Scholar. His research interests include signal and image processing, and the development of computational anatomy tools to study changes in anatomy (brain, retina, and heart) in response to diseases and treatments. Marinko V. Sarunic is an associate professor in the School of Engineering Science at SFU and a MSFHR Scholar. His research interests in biomedical imaging include OCT, AO, and the use of GPUs for real-time processing and display.
6,277.6
2014-11-01T00:00:00.000
[ "Biology" ]
Setup in a clinical workflow and impact on radiotherapy routine of an in vivo dosimetry procedure with an electronic portal imaging device High conformal techniques such as intensity-modulated radiation therapy and volumetric-modulated arc therapy are widely used in overloaded radiotherapy departments. In vivo dosimetric screening is essential in this environment to avoid important dosimetric errors. This work examines the feasibility of introducing in vivo dosimetry (IVD) checks in a radiotherapy routine. The causes of dosimetric disagreements between delivered and planned treatments were identified and corrected during the course of treatment. The efficiency of the corrections performed and the added workload needed for the entire procedure were evaluated. The IVD procedure was based on an electronic portal imaging device. A total of 3682 IVD tests were performed for 147 patients who underwent head and neck, abdomen, pelvis, breast, and thorax radiotherapy treatments. Two types of indices were evaluated and used to determine if the IVD tests were within tolerance levels: the ratio R between the reconstructed and planned isocentre doses and a transit dosimetry based on the γ-analysis of the electronic portal images. The causes of test outside tolerance level were investigated and corrected and IVD test was repeated during subsequent fraction. The time needed for each step of the IVD procedure was registered. Pelvis, abdomen, and head and neck treatments had 10% of tests out of tolerance whereas breast and thorax treatments accounted for up to 25%. The patient setup was the main cause of 90% of the IVD tests out of tolerance and the remaining 10% was due to patient morphological changes. An average time of 42 min per day was sufficient to monitor a daily workload of 60 patients in treatment. This work shows that IVD performed with an electronic portal imaging device is feasible in an overloaded department and enables the timely realignment of the treatment quality indices in order to achieve a patient’s final treatment compliant with the one prescribed. Introduction abdomen, and pelvis) with a scheduled treatment of more than 20 fractions. VMAT and IMRT treatments were carried out with one of the treatment planning systems (TPSs) available in our department, including the Oncentra Masterplan 4.3, Monaco version 3.0 and 5.0 (Elekta Stockholm, Sweden), Pinnacle 3 TM Version 9.10 (Philips Medical Systems, Eindhoven, the Netherlands). The VMAT plans were performed with one or two arcs whereas the IMRT plans had five to nine beams delivered via a step-and-shoot technique. At least five IVD tests were scheduled for each patient while 25 to 45 fields were tested for each IMRT patient and five to ten fields were tested for each VMAT patient. Table 1 shows the distribution of the IVD tests versus the TPS used and the adopted technique. The patients were immobilized using personal thermoplastic masks to cover the treatment site and they were allowed to wear thin clothes over the skin for the upper or lower portion of the abdomen and for pelvis and breast treatments. Signs for patient repositioning were drawn onto the mask. CBCT was performed during the first therapy session, referred to in this study as the reference fraction, and then twice a week or before a repeated IVD test after a correction. 
The couch was moved into the correct position after the CBCT alignment process, and the maximum accepted displacements in any one of the x, y, or z directions, following the procedures adopted in our department, were ± 5 mm for the pelvis, abdomen, breast, and thorax, and ± 3 mm for the H&N. A pre-treatment verification was performed before the beginning of the treatment, comparing the beam x-ray fluence measured by a 2D-array (MatriXX Evolution, IBA Dosimetry GmbH, Schwarzenbruck, Germany) and that computed by the TPS. The study was reviewed and approved by the Ethics Committee of Sichuan Cancer Hospital. IVD procedure The dedicated software SOFTDISO (Best Medical Chianciano, Italy) version 1.24 for EPID in vivo dosimetry was used [23,25]. The software, based on a Si-EPID, is commissioned by evaluating the following parameters for each photon energy: the beam quality index TPR20,10 (tissue phantom ratio), the absolute dose (cGy/MU) under reference conditions, a calibration factor k s for the EPID, and its linearity within the range of the monitor unit (MU) that was used [24]. The first two parameters were obtained during the linac commissioning following IAEA TRS-398 [26], while a few measurements were performed to characterize the EPID response. The constancy of the EPID calibration factor k s was added to the quality controls of the linac on a weekly basis, while the linearity with the MU was inserted in the annual controls [24]. SOFTDISO was connected to the different TPSs to receive DICOM computed tomography (CT) images and the RT plan; the EPID database was used to receive the images acquired during the administration of the treatment. The transfer of the DICOM RT plan and CT scan from the TPS must be performed manually, as must the preparation of the patient in SOFTDISO. The images were transferred to SOFTDISO and automatically evaluated. The software uses a dosimetric method implemented by Piermattei et al. [23,24] that provides two types of IVD tests: the ratio R = D iso / D tps between the reconstructed (D iso ) and the planned (D tps ) isocentre dose and a γ-analysis obtained between the first EPID image, or reference image (obtained at the reference fraction), and the subsequent images acquired during the course of treatment. The first test is representative of the accuracy of the dose reconstructed at a reference point, while the second test is the γ-analysis that supplies a transit dosimetry to verify the treatment reproducibility. Both tests supply useful information about the presence of dosimetric errors due to the patient setup, linac output variations, beam interruptions, dose calculations [16,22], and the presence of patient morphological changes [3,27]. The in vivo EPID-based dosimetry workflow applied in this study is shown in Fig 1. The index R, the ratio between the reconstructed and planned isocentre dose, is considered in tolerance when 0.95 ≤ R ≤ 1.05. This range takes into account the propagation of the uncertainties of the SOFTDISO reconstruction algorithm for D iso and the TPS reconstruction algorithm for the calculation of D tps [3,17,22,23,24,27]. The global γ-analysis [28] between the reference EPID and the current images uses two gamma parameters: 1) the EPID percentage signal to agreement ΔS%; and 2) the distance to agreement Δd (mm) [22,25]. We adopted ΔS% = 3% and Δd = 3 mm for the H&N treatment and ΔS% = 5% and Δd = 5 mm for all other treatment sites.
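For readers unfamiliar with the γ-analysis, the following Python sketch computes a simplified global 2-D γ map between a reference and a current EPID signal using ΔS% and Δd criteria of the kind quoted above. It is a generic brute-force illustration, not the SOFTDISO algorithm; the pixel spacing, the normalisation to the global maximum of the reference image, the search radius, and the synthetic images are all assumptions.

```python
import numpy as np

def gamma_map(ref, cur, px_mm, dS_pct=3.0, dd_mm=3.0, search_mm=6.0):
    """Brute-force global 2-D gamma between a reference and a current EPID
    signal map. The signal-difference criterion is a percentage of the global
    maximum of the reference image; the pixel spacing px_mm is assumed
    isotropic. Simplified illustration only."""
    dS = dS_pct / 100.0 * ref.max()
    r = int(round(search_mm / px_mm))               # half-width of the search window (pixels)
    ny, nx = cur.shape
    # precompute squared, criterion-normalised spatial distances in the window
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    dist2 = ((dy * px_mm) ** 2 + (dx * px_mm) ** 2) / dd_mm ** 2
    ref_pad = np.pad(ref, r, mode='edge')
    g = np.empty_like(cur, dtype=float)
    for i in range(ny):
        for j in range(nx):
            block = ref_pad[i:i + 2 * r + 1, j:j + 2 * r + 1]
            diff2 = (cur[i, j] - block) ** 2 / dS ** 2
            g[i, j] = np.sqrt((dist2 + diff2).min())
    return g

def gamma_indices(g):
    """Pass rate gamma% (points with gamma < 1) and mean gamma value."""
    return 100.0 * np.mean(g < 1.0), g.mean()

# usage with synthetic signals and the H&N criteria (3%, 3 mm)
rng = np.random.default_rng(0)
ref = 100.0 + 5.0 * rng.standard_normal((60, 60))
cur = ref + 1.0                                     # small systematic shift
gp, gm = gamma_indices(gamma_map(ref, cur, px_mm=1.0))
print(f"gamma% = {gp:.1f}, gamma_mean = {gm:.2f}")
```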
In particular, Δd values were selected to be equal to the maximum displacement acceptable in clinical practice, while ΔS values were defined by taking into account the presence of heterogeneous tissues, dose gradients, and mobility of the irradiated organs. Partly following the indications for the γ-analysis performed for the patient pre-treatment verification, two tolerance levels for the indices were selected: 1) the percentage γ-index γ% ≥ 90% (i.e. the percentage of points with γ < 1 must be greater than 90%) and 2) the mean γ value γ mean ≤ 0.4. Therefore, within the EPID irradiated area, a maximum of 10% of the points in disagreement were accepted, while the weight of the discrepancy was given by the distribution of γ values, which had to be characterized by a mean value lower than 0.4. The ratio R = D iso /D tps between the reconstructed (D iso ) and the planned (D tps ) isocentre dose, the percentage gamma index γ%, and the mean value of gamma γ mean were evaluated with SOFTDISO using the images of the electronic portal imaging device (EPID) and the data from the TPS and the iViewGT. In summary, one test T was defined by the results obtained for the indices R, γ% and γ mean for each patient and for each beam of the therapy session. In this way, an IVD test warning started when at least one of the three indices was out of tolerance. The aim of the corrective action was to reach values of the three average indices that were in tolerance for each patient (after all the IVD tests), i.e. values of R̄ within 5% of unity (0.95 ≤ R̄ ≤ 1.05), γ̄% ≥ 90%, and γ̄ mean ≤ 0.4. In particular, a mean value R̄ beam was obtained for each beam by averaging the values of R obtained for this beam on different days; therefore, the value of R̄ for a patient was obtained as the average of the mean ratios R̄ beam. The indices γ̄% and γ̄ mean were obtained in the same way. A possible deviation from the planned conditions could be present in the first reference image. The use of pre-treatment verification enables us to exclude deviations in the x-ray fluence. Moreover, the CBCT carried out at the reference fraction (and subsequently twice a week) can intercept a deviation from the planned conditions due to morphological changes. In case of a deviation, a subsequent fraction is chosen as the new reference fraction and the reference image is acquired in concomitance with a new CBCT. During the therapy fractions, deviations that can occur (such as the unexpected presence of attenuators on the beam) are not taken into consideration in the TPS and are easily intercepted by the indices R and γ. The time needed for each step of the procedure was registered. Management of indices outside tolerance levels The results of the three indices were displayed for every beam and every fraction on the main screen of the software. In the case of a warning (i.e. when one of the three indices was outside its tolerance level), the cause was investigated first by an experienced medical physicist and subsequently by a radiation oncologist in order to decide the correction to be performed. In this case, another IVD test was performed the following day in concomitance with a new CBCT to verify the effect of the correction. The R ratios obtained for different fractions and the tolerance threshold (0.95 ≤ R ≤ 1.05) displayed on the main screen of SOFTDISO enable immediate identification of an off-tolerance level (OTL), whether due to a trend that slowly drives R beyond the acceptable thresholds or simply to an acquisition error.
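The warning logic and the per-patient averaging described above can be summarised in a short sketch; the data structure and field names below are invented for illustration, while the tolerance levels (0.95 ≤ R ≤ 1.05, γ% ≥ 90%, γ mean ≤ 0.4) are those stated in the text.

```python
from dataclasses import dataclass
from statistics import mean

# tolerance levels taken from the text
R_LO, R_HI, GAMMA_PCT_MIN, GAMMA_MEAN_MAX = 0.95, 1.05, 90.0, 0.4

@dataclass
class IvdTest:                     # one beam of one fraction (field names are illustrative)
    beam: str
    fraction: int
    R: float                       # D_iso / D_tps
    gamma_pct: float               # % of points with gamma < 1
    gamma_mean: float

def in_tolerance(t: IvdTest) -> bool:
    """An IVD warning starts when at least one of the three indices is out of tolerance."""
    return (R_LO <= t.R <= R_HI and
            t.gamma_pct >= GAMMA_PCT_MIN and
            t.gamma_mean <= GAMMA_MEAN_MAX)

def patient_summary(tests):
    """Per-patient averages: each index is averaged per beam over the fractions
    first, then across beams, as described in the text for R_bar."""
    beams = sorted({t.beam for t in tests})
    per_beam = lambda key: [mean(getattr(t, key) for t in tests if t.beam == b)
                            for b in beams]
    return {"R_bar": mean(per_beam("R")),
            "gamma_pct_bar": mean(per_beam("gamma_pct")),
            "gamma_mean_bar": mean(per_beam("gamma_mean"))}

# usage
tests = [IvdTest("b102", 1, 0.99, 97.0, 0.21),
         IvdTest("b102", 5, 1.07, 84.0, 0.55),   # warning: all three indices off
         IvdTest("b250", 1, 1.01, 99.0, 0.18)]
warnings = [t for t in tests if not in_tolerance(t)]
print(len(warnings), "test(s) out of tolerance;", patient_summary(tests))
```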
The possible reasons of an OTL for the γ index can be identified by comparing the inline and crossline signal profiles of the EPID images acquired for different fractions. The γ-analysis reported with a map of the points with γ > 1 over the digital reconstructed radiography (DRR) in its different projections is also a helpful tool to identify the possible causes of discrepancy. Fig 2 reports an example of IVD test results as displayed on the main screen of SOFTDISO for a 102˚beam entry of an H&N IMRT treatment. In this case, the γ-analysis of the 5th fraction shows a hot dose region located in correspondence with the patient's shoulders. The comparison (Fig 2(B)) of the green EPID signal profile of the current fraction with the red profile acquired at the first fraction (reference EPID image), shows a large discrepancy. Moreover, the map of the points with γ > 1 (Fig 2(G)) over the digital reconstructed radiography (DRR) indicates a possible shift of the patient's shoulders. In this case, the patient setup was corrected by a new CBCT and the new IVD test yielded a tolerable value of γ% = 98%. Some IVD tests with OTLs were due to the incorrect positioning of the flat panel, a wrong set up of the acquisition parameters in the iView system, or a lack of synchronization between the image acquisition and delivery (images partially acquired). These cases had high OTLs and were easily identified by observing the results displayed on the main screen of the software (Fig 2(D)). These tests were excluded from the IVD analysis. From the IVD results, we were able to distinguish two classes of errors referred to as class 1 and class 2. In class 1 errors, OTLs were due to inadequate standard quality controls as defined by the AAPM Task Group 142 [29]. Inadequate controls were the causes of errors in the patient setup (including the accidental presence of attenuators on the beams such as the edge of the beam couch not taken into account by the TPS). In class 2 errors, OTLs were due to patient morphological changes such as tumour shrinkage and loss of patient weight, i.e. all causes that generally require patient morphological controls that were distinguished from the technical and dosimetric controls in this study. IVD tests The results of our study are reported in terms of the percentage of IVD tests, T%, with indices R, γ%, and γ mean within tolerance levels and percentage of patients, P%, with mean values of the three indices " R, " g%, and " g mean within tolerance levels. A total of 15% of the EPID images acquired presented artefacts due to errors during the acquisition process and therefore had very high OTLs. These IVD tests are not included in the reported results. Setup in a clinical workflow and impact on radiotherapy routine of an in vivo dosimetry procedure From the analysis of the IVD tests, we can summarize that class 1 errors accounted for 90% of the OTL tests in this study. The results of 3682 IVD tests carried out for 147 patients undergoing IMRT and VMAT treatments are listed in Tables 2 and 3 in terms of P% and T% with indices within the tolerance levels. 
The results in Table 2 indicate that, for every treatment site and technique adopted, the percentage of patients with values of R̄, γ̄%, and γ̄ mean within tolerance levels was 100% for the majority of the treatments performed, with the exception of the IMRT treatments of the breast and thorax, which respectively exhibited P%(γ̄%) and P%(γ̄ mean) values of 75% and 78%, and of the VMAT treatment of the thorax, for which P%(γ̄%) was 78%. The results in Table 3 show that T% for R values within the tolerance levels is about 90% for the treatment of the abdomen (IMRT and VMAT), breast (VMAT), thorax (IMRT) and H&N (IMRT), and over 95% for the remaining treatment sites and techniques. The values of T% are spread over a wide range: from 75% to 95% and from 78% to 100% for the γ% and γ mean indices within tolerance levels, respectively. Breast and thorax were the two treatment sites for which the lowest percentages were obtained, i.e. 75% for breast and thorax (IMRT), 76% for thorax (VMAT). In general, γ% values were clustered around γ mean. In Fig 3, the results of the γ-analysis for IMRT and VMAT treatments are displayed for all the treatment sites. Class 1 errors were particularly important for the breast and thorax areas, for which T% was appreciably lower than for other treatment sites. Workload The daily data acquired and processed by SOFTDISO were evaluated the following morning. The OTL tests were analysed and corrected for the subsequent session, which was on the same day for the treatments scheduled in the afternoon or night, and the next day for the treatments scheduled in the morning. Considering a mean workload of 60 patients per day per linac and two IVD checks per week per patient, an average number of 24 patients per day per linac were scheduled for IVD screening. Table 4 shows an analysis of the time required to perform the entire procedure for 24 patients, six of whom were newly scheduled. The daily mean values for the number of patients tested, the corresponding mean number of tests, the time needed to import the EPID images into SOFTDISO, and the computation time required for the IVD tests were registered. For new patients starting with the IVD procedure, the time to export the patient's data (DICOM RT plan and CT scan) from the TPS to SOFTDISO and the time for their commissioning was added. The mean total time required for the entire procedure is 67 min, i.e. less than 3 min per patient. Considering that the results of the patients without OTLs were directly stored, the operator needs to examine only the OTL tests, on average 30% of the total (i.e. those with class 1 and class 2 errors). Thus, since about 70% of the tests did not require review, the overall mean computation time was reduced from 37 to 12 min. Moreover, the overall mean time was reduced from 67 to 42 min (i.e. less than 2 min per patient). The workload for the discussion between medical physicists, radiation oncologists, and therapists pertaining to the OTL tests requires a variable amount of time depending on the complexity of the causes. Discussion The overall number of IVD tests performed for the two different techniques and five treatment sites enables us to formulate some considerations.
In the present study, approximately 90% of the OTL tests were due to class 1 errors and were corrected with a systematic check of the patient setup, immobilization system, and alignment, since no errors due to the quality control of the linac, couch, or lasers were found. The rapid computation of the indices enables us to assure an adequate number of IVD tests for each patient. Determining the causes of errors for each OTL index, and adopting the appropriate corrections, the successive IVD tests guaranteed at the end of the treatment course a re-alignment of the average index within the tolerance levels at the end of the treatment course. The remaining 10% of the OTL tests were class 2 errors and, therefore, they were followed individually even for pelvis and abdomen (gas pocket) areas; these errors were adjusted by pushing the patients to follow the indications received during the planning CT pertaining to the daily preparation (e.g. diet, bladder filling, and empty rectum). As expected, the comparison between Tables 2 and 3 highlighted that the percentage of patients P% with indices within the tolerance levels was in general higher than the percentage of tests T% in tolerance. This shows that the effect of the corrections was evident for all the treatment sites and P% values were equal to 100% with the exception of the breast (IMRT) and thorax (IMRT and VMAT) areas. The fraction of OTL tests obtained for breasts treated with IMRT was 25% and was partially due to two patients (out of eight) treated with the bolus positioned over the mask and with a mask that did not fit perfectly the body of the patients. Therefore, these patients showed daily a different configuration compared to that of the planning CT for the positioning of the bolus with respect to the treatment site and the air gap between the skin and the mask. The radiation oncologists decided to proceed with new CT scans after three repeated fractions of OTL tests. The effect of the correction was efficient for both patients during the subsequent fractions, even if the mean value of γ indices " g% and " g mean were still outside tolerance levels due to the limited number of IVD tests acquired after the correction. However, breast treatments showed a high treatment accuracy considering that 100% of patients for both techniques resulted with an " R index within tolerance. In general, the IMRT treatments resulted in a higher percentage of OTL tests compared with those for the VMAT treatments. For an IMRT treatment, each beam corresponds to an IVD test obtained by the EPID image acquired during the delivery. Some beam entries can easily lead to an OTL test if the path of the beam in the patient is varied with respect to the planning configuration, as for the two breast treatment cases described above. For VMAT treatments, the analysed EPID image is obtained by adding the signal of multiple beam entries of the arc; in this way, any inaccuracies of a specific gantry angle can be compensated in the overall arc. Moreover, the bolus automatically created by the treatment planning may lead to a discrepancy between the daily patient setup and that used for the treatment planning. The mould mask performed over the patient's clothes and not directly on the patient skin can also contribute to a lack of reproducibility in this situation. For the thorax treatments, the above consideration regarding the patient setup reproducibility remained valid. 
Two additional contributions to these OTL indices were: 1) the use of a reconstruction point, which can be in a high gradient region; and 2) changes in the patient's anatomy. As already underlined by Celi et al. [30], the position of the point of interest (in the heterogeneity interface, tongue and groove, and high-dose gradient) plays an important role in case of an observed dose difference. Once these aspects were taken into account, the tests confirmed a correct treatment. However, when P% values less than 100% persisted, this was due to the limited number of tests acquired after the correction. The abdomen and pelvis treated by IMRT and VMAT resulted in 100% of the patients exhibiting " R, " g%, and " g mean indices within tolerance levels. This result confirmed that each patient received a treatment that was compliant with the planned treatment course. The percentage of IVD tests with indices within tolerance levels were less favourable, as a decrement ranging from 5% to 11% was found for the IMRT treatments and a reduction of up to a 9% was found for VMAT treatments. These results were justified by the map of the points with γ > 1 over the DRR such as those observed in Fig 2(G); in this case the discrepancy was due to occasional intestine air gap and occasional different filling conditions of the bladder and rectum. H&N treatments by IMRT and VMAT techniques resulted in 100% of the patients exhibiting " R, " g%, and " g mean indices within tolerance levels. The OTL tests occurred mainly because the beam entry had a path within the nasopharyngeal air cavity. For one patient, the loss of weight required a new treatment plan involving a new CT scan and immobilization mask. Use of CBCT clearly improved the results but it was not able to correct for different patient profile shapes, densities, and depths crossed by the beams. During this start-up period, individual corrections were performed, which were primarily patient-positioning adjustments. In four cases (two breast cases presenting an imperfect fit of the mask positioning, a H&N case associated with weight loss and a thorax case without clear causes), the CBCT required by the OTLs of the IVD tests convinced the radiation oncologist to require an adaptive plan. Following the results obtained in this study, some setup procedures of our department have been revised. In particular, a breast board for the breast and thorax positioning has been adopted instead of the mould mask system. In addition, the bolus is now imaged directly at the planning CT instead of creating it using an automatic tool in the treatment planning system. There are many important and practical concerns to be addressed for a successful in vivo dosimetry procedure such as the identification of an adequate threshold (threshold with clinical relevance) for each parameter followed. Currently, researchers such as Fuangrod et al. [31] are carrying out a statistical process control to identify the most significant threshold that has to be applied in a clinical routine for each treatment site and technique. These evaluations are beyond the scope of this study. Our findings must be understood within the insightful limitation of this research. As reported by other authors [18], the isocentre can be located in a high-dose gradient region or sometimes out of the target and, therefore, it cannot have a clinical significance in some cases. 
Our study involved patients from a single institution and was not validated by a multi-institutional quality assurance program; therefore, the results cannot be generalized. However, the results show that deviations from the initial treatment conditions can arise and that these deviations can be corrected during the course of the treatment if a practical IVD procedure is adopted. Concerning the accuracy of the current EPID-based IVD methods, it is important to remember that they rely on reconstructing the photon fluence using the CT scan adopted for the planning computation. This means that, in case of IVD warnings, for example due to an incorrect patient setup or anatomical changes, the reconstructed photon fluence may differ from that actually used for the dose delivery. Therefore, the 1D, 2D, and 3D IVD tests based on dose recalculation from the reconstructed photon fluence can present some inaccuracies. However, all the current procedures can supply useful dose delivery warnings, providing indications to prevent errors in the subsequent fractions of the treatment course. Some researchers [32] suggested using CBCT scans to reconstruct the dose in patients from the EPID images used for the fluence reconstruction, but the CBCT image calibration methods need experimental confirmation and more automated procedures [33]. In spite of these well-known difficulties, our intention is to continue investigating the feasibility of using calibrated CBCT scans instead of CT scans to assess the dosimetric errors.
Conclusions
The results obtained for 147 patients highlighted that OTL tests arise during the course of treatment. In this study, over 10% of the IVD tests for the pelvis, abdomen, and H&N treatments with IMRT, and up to 25% for the breast and thorax with the VMAT techniques, resulted in OTLs. The timely intervention and correction of these errors allow the re-alignment of the quality indices within the tolerance levels, ensuring that each patient's final treatment is compliant with the prescribed treatment. With two IVD tests scheduled per week per patient, an average time of 42 min per day is sufficient to monitor a daily workload of 60 patients. In summary, the EPID-based IVD procedure is a powerful method for monitoring treatment reproducibility and accuracy and for assessing the suitability of new techniques and immobilization systems. Its use in a radiotherapy clinical workflow is feasible and adds an acceptable workload.
Supporting information
S1 File. EPID-based in vivo dosimetry indices results. The R, γ%, and γmean results registered in the SOFTDISO database are reported in columns G, H, and I for every beam tested (column E). The anonymized patient ID, the reference plan, the patient pathology, and the machine ID are specified in columns A, B, C, and D, respectively.
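As an illustration of how the reported T% and P% summaries can be derived from the S1 File, the following is a minimal, hypothetical sketch in Python/pandas, not the SOFTDISO software itself; the column layout follows the description above, while the CSV export format and the tolerance thresholds are assumptions to be replaced with the locally adopted values.

```python
import pandas as pd

# Minimal sketch: assumes the S1 File has been exported to a headerless CSV
# with the column layout described above (A=patient ID, B=reference plan,
# C=pathology, D=machine ID, E=beam, G=R, H=gamma%, I=gamma_mean).
# The thresholds below are placeholders, not the clinically adopted values.
R_LOW, R_HIGH = 0.95, 1.05     # assumed tolerance interval for the R index
GAMMA_PCT_MIN = 90.0           # assumed minimum gamma% (points with gamma < 1)
GAMMA_MEAN_MAX = 0.5           # assumed maximum mean gamma value

df = pd.read_csv("S1_file.csv", header=None)
df = df.rename(columns={0: "patient", 2: "pathology", 4: "beam",
                        6: "R", 7: "gamma_pct", 8: "gamma_mean"})

# A test is "in tolerance" only if all three indices are within their limits.
df["in_tolerance"] = (df["R"].between(R_LOW, R_HIGH)
                      & (df["gamma_pct"] >= GAMMA_PCT_MIN)
                      & (df["gamma_mean"] <= GAMMA_MEAN_MAX))

# T%: fraction of IVD tests in tolerance, per treatment site.
t_percent = df.groupby("pathology")["in_tolerance"].mean() * 100

# P%: fraction of patients whose average indices are in tolerance.
patient_means = df.groupby(["pathology", "patient"])[["R", "gamma_pct", "gamma_mean"]].mean()
patient_ok = (patient_means["R"].between(R_LOW, R_HIGH)
              & (patient_means["gamma_pct"] >= GAMMA_PCT_MIN)
              & (patient_means["gamma_mean"] <= GAMMA_MEAN_MAX))
p_percent = patient_ok.groupby(level="pathology").mean() * 100

print(t_percent.round(1))
print(p_percent.round(1))
```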
6,235.6
2018-02-12T00:00:00.000
[ "Medicine", "Physics" ]
The Study of Hepatitis B Virus Using Bioinformatics
Hepatitis refers to the inflammation of the liver. A major cause of hepatitis is the hepatotropic virus, hepatitis B virus (HBV). Annually, more than 786,000 people die as a result of the clinical manifestations of HBV infection, which include cirrhosis and hepatocellular carcinoma. Sequence heterogeneity is a feature of HBV, because the viral-encoded polymerase lacks proof-reading ability. HBV has been classified into nine genotypes, A to I, with a putative 10th genotype, "J," isolated from a single individual. Comparative analysis of HBV strains from various geographic regions of the world and from different eras can shed light on the origin, evolution, transmission, and response to anti-HBV preventative and treatment measures. Bioinformatics tools and databases have been used to better understand HBV mutations and how they develop, especially in response to antiviral therapy and vaccination. Despite its small genome size of ~3.2 kb, HBV presents several bioinformatic challenges, which include the circular genome, the overlapping open reading frames, and the different genome lengths of the genotypes. Thus, bioinformatics tools and databases have been developed to facilitate the study of HBV.
Introduction
Primarily, bioinformatics is the use of computational science to study biological and clinical data using statistics, mathematics, and information theory. This field is developing and evolving; thus, the definition cannot be precise. Moreover, the field is broad, ranging from the study of DNA and proteins, to structural biology, drug design and comparative genomics, transcriptomics, proteomics, and metagenomics. The optimization of computational technology is paramount in order to handle, store, manage, and analyze the large volumes of data generated in the last decade. The data include molecular sequencing data of host and pathogen genomes and their associations to demographic and clinical records, laboratory test results, as well as information on treatment. Moreover, bioinformatics can aid in the investigation of virus-host genome and environmental interactions and in the identification of both host and viral biomarkers. This analysis can lead to a better understanding of the clinical manifestation of disease and the effective design of preventative and treatment measures [1]. In the first section, we describe the unique genomics and molecular biology of hepatitis B virus (HBV). Using illustrative examples, we show how bioinformatics analyses can facilitate the understanding of the origin, evolution, transmission, and response to antiviral agents of HBV. Next, we describe the bioinformatics challenges posed by HBV and present the public databases and tools currently available for the study of HBV.
Hepatitis
Hepatitis refers to the inflammation of the liver. A major cause of hepatitis is the hepatotropic virus, HBV. HBV infection is a public health problem of worldwide importance. Globally, 2 billion people have been exposed to this virus at some stage of their lives, and 240 million are chronic carriers of the virus [2]. This infection can lead to a spectrum of clinical consequences. In the majority of cases, the infection is subclinical and transient, whereas in 25% of cases it can cause self-limited acute hepatitis, and in 1% of these it progresses to acute liver failure. The virus can persist in 90% of neonates and 5-10% of adults, leading to chronic infection that can progress to either chronic hepatitis or an asymptomatic carrier state. Both of these states can ultimately lead to liver cancer or hepatocellular carcinoma (HCC), with or without the intermediate cirrhotic stage. Annually, more than 786,000 people die as a result of these clinical manifestations of HBV infection [3].
Prevalence
The prevalence of HBV in a community can be estimated by the proportion of the population who are hepatitis B surface antigen (HBsAg)-positive carriers. HBV prevalence varies widely in the world [3]. The prevalence is low (<1%) in northern Europe, Australia, New Zealand, Canada, and the United States of America. Northern Asia, the Indian subcontinent, parts of Africa, Eastern and south-eastern Europe, and parts of Latin America are areas of intermediate prevalence (1-5%). The high prevalence areas (5-20%) include East and Southeast Asia, the Pacific Islands, and sub-Saharan Africa.
Classification and structure
HBV, the prototype member of the family Hepadnaviridae, belongs to the genus Orthohepadnavirus. With a diameter of 42 nm and a DNA genome of ~3.2 kilobases (kb), it is the smallest DNA virus infecting man. The genome is circular and partially double stranded. One DNA strand is complete, except for a small nick (the minus strand), and the other is short and incomplete (the plus strand). The minus strand contains four overlapping open reading frames (ORFs; Figure 1) [4] that represent: (1) the preS/S gene, which codes for the envelope proteins, the large, middle, and small HBsAgs; (2) the P gene for the DNA polymerase/reverse transcriptase (POL); (3) the X gene for the X protein, a key regulator during the natural infection process, which has transcriptional trans-activation activity and is required to initiate and maintain HBV replication [5]; and (4) the precore/core gene, which codes for the HBcAg or core protein that forms the capsid and for an additional protein known as HBeAg, which is not incorporated into the virus itself but is expressed on the liver cells and secreted into the serum. Figure 2 illustrates the structure of the hepatitis B virion.
Regulatory elements of HBV
Every single nucleotide of the HBV genome is necessary for the translation of a protein and may also be part of one of the regulatory elements of HBV, which overlap with protein-expressing regions. The regulatory elements include the S1 and S2 promoters, which overlap both the preS region and the polymerase ORF; the preC/pregenomic promoter, which includes the basic core promoter (BCP) and overlaps the X and preC ORFs; and the X promoter. There are two enhancers (enhancer I and enhancer II) as well as cis-acting negative regulatory elements (URR: upper regulatory region, CURS: core upstream regulatory sequence, NRE: negative regulatory element). These regulatory elements control transcription (reviewed in [6,7]).
Replication of HBV
HBV and other members of the family Hepadnaviridae have an unusual replication cycle. These DNA viruses replicate by reverse transcription of an RNA intermediate known as the pregenomic RNA (pgRNA) [8]. Entry into the cell is via the sodium taurocholate cotransporting polypeptide (NTCP), a multiple transmembrane transporter predominantly expressed in the liver [9]. After entry, the virion is uncoated and the core particle is actively transported to the nucleus [10], where the partially double-stranded relaxed circular DNA molecule is released. The single-stranded gap is closed by the viral polymerase to yield a covalently closed circular molecule of DNA (cccDNA) [11], which is the template for transcription by the host RNA polymerase II [12]. The mRNAs are transported into the cytoplasm, where they are translated into the seven viral proteins. In addition to being translated into the polymerase and the core protein, the pgRNA is packaged into immature core particles by the process known as encapsidation. In order to be encapsidated, the 5′ end of the pgRNA has to be folded into a particular secondary structure known as the encapsidation signal (ε) [13]. The encapsidation signal (ε) is a bipartite stem-loop structure, consisting of an upper and lower stem, the bulge, and an apical loop. Besides encapsidation, ε has a number of other functions (reviewed in [13] and references therein). It acts in template restriction, so that not just any piece of RNA is encapsidated, and it also plays a role in the activation of the viral polymerase, so that there is no indiscriminate reverse transcription. It is also involved in the initiation of reverse transcription. The polymerase or reverse transcriptase acts as a primer of RNA-directed DNA synthesis by the binding of the polymerase to the bulge of ε. The first three nucleotides of the negative strand of DNA are synthesized at the bulge and are transferred to an acceptor site on the 3′ end of the pgRNA, where DNA synthesis proceeds toward the 5′ end of the pgRNA [14], giving rise to the immature virion. The virus matures by acquiring its glycoprotein envelope, containing HBsAg, in the endoplasmic reticulum and is exported by vesicular transport from the cell [15].
Genotypes and subgenotypes of HBV
Sequence heterogeneity is a feature of HBV, because the viral-encoded polymerase lacks proofreading ability, as mentioned above [16]. Using phylogenetic analysis of the complete genome of HBV and an intergroup divergence of greater than 7.5%, HBV has been classified into nine genotypes, A to I [17,18,19], with a putative 10th genotype, "J," isolated from a single individual [20]. With between ~4 and ~8% intergroup nucleotide difference across the complete genome and good bootstrap support, genotypes A-D, F, H, and I are classified further into at least 35 subgenotypes [21]. The genotypes differ in genome length, the size of ORFs and the proteins translated [17], as well as in the development of various mutations [22]. Generally, the genotypes, and in some cases the subgenotypes, have a distinct geographic distribution (Table 1).
Table 1. Comparison of the virological and clinical characteristics of the genotypes and subgenotypes of HBV. Table footnotes: #, and in regions outside Africa where there was historical forced migration as a result of the slave trade [23]; ¥, Vietnamese residing in Canada [24].
Genotyping and subgenotyping methods
HBV genotypes, and in some cases subgenotypes and various mutations, can influence the clinical course of disease [22] as well as the response to antiviral therapy [25], and can be used to show transmission [26] and to trace human migrations [23]. Thus, HBV genotyping is becoming increasingly relevant in the clinical setting, may contribute to future personalized treatment [27], and may be important in epidemiological and transmission studies. Bioinformatics has played a major role in the development of various tools that can be used for identifying genotypes/subgenotypes and detecting various mutations. Therefore, a number of methods have been developed [28,29]. Although analysis of the HBV S gene sequence is sufficient to classify HBV into genotypes [30], the complete genome sequence provides additional information with respect to phylogenetic relatedness [31,32], including the identification of recombinants. Furthermore, even though complete genome analysis is the gold standard for genotyping, it does not allow for rapid and direct analysis on a large-scale basis [17] and requires expertise, and thus capacity development, in computer processing coupled with phylogenetic analyses. In order to expedite and facilitate genotyping, a number of methods have been developed [17,28,29]. Each one has its advantages and disadvantages [17,28,29], which should be taken into account when selecting the genotyping method appropriate for a particular study or application.
Phylogenetic analyses of HBV
Although, as already mentioned, the error-prone polymerase of HBV leads to sequence heterogeneity [16], the degree to which this can occur is constrained by the partially overlapping ORFs and the presence of secondary RNA structures, such as ε, coded by nonoverlapping regions [33,34]. The HBV genome has been estimated to evolve with an error rate of ~10⁻³-10⁻⁶ nucleotide substitutions/site/year [35-41], although this rate is not constant within the different regions of the HBV genome [41]. The progress of computers and information technology has played an important role in the development of phylogenetic analysis as a powerful tool in the analysis of the molecular evolution of viruses. As exemplified in the next sections, comparative analysis of HBV strains from various geographic regions of the world and from different eras can shed light on the origin, evolution, transmission, and response to anti-HBV preventative and treatment measures.
Origin
The origin and age of the family Hepadnaviridae remain controversial. However, until the issues with the estimation of the substitution rate of HBV [41] are overcome, the debate on the origin of HBV will continue ([17,41] and references cited therein). Nonetheless, bioinformatics, coupled with the growing number of hepadnaviral sequences in the databases with accurate sampling times and with advances in phylogenetic and coalescent methodology [42], is beginning to shed light on this issue. For example, according to Suh and colleagues [43], analysis of the endogenous sequences in the zebra finch provides direct evidence that the compact genomic organization of hepadnaviruses has not changed during the last 482 million years of hepadnaviral evolution. Furthermore, phylogenetic analyses and the distribution of HBV relics suggest that birds potentially are the ancestral hosts of the family Hepadnaviridae and that mammalian hepatitis B viruses probably emerged after a bird-mammal host switch [43].
Evolution
Genetic variation is important in viral evolution. The sequence heterogeneity displayed by HBV because of the lack of proof-reading ability of the polymerase is limited by functional constraints [33], leading to non-random variation [44]. Moreover, mutations can be affected by host-virus interaction and by selective pressure, imposed endogenously by the immune system and exogenously by vaccination and antiviral treatment [17]. Phenotypic resistance to antiviral drugs occurs because of mutations in the reverse transcriptase of POL, whereas mutations in the BCP/preC and preS regions have been implicated as risk factors for the development of HCC. Mutations in the S region coding for HBsAg can lead to both vaccine and detection escape of HBV. At any time, the virus population can be composed of a number of different mutants referred to as "quasispecies" [45]. Direct sequencing and, more recently, next generation sequencing (NGS), in parallel with bioinformatics, provide us with powerful tools to study the evolution of the various HBV mutations. NGS or ultra-deep sequencing generates large volumes of data, which can only be analyzed using bioinformatics tools, and provides large coverage that can detect minor quasispecies populations of HBV [46-51] that may be important in understanding HBV pathogenicity and response to treatment. In order to minimize the number of artifactual calls of single-nucleotide variations in NGS, it is important that the correct reference sequences are used [51,52]. By designing a circular construct, Homs and co-workers [53] were able to use NGS to study the evolution of both the precore and polymerase regions. They demonstrated the presence of precore mutants in the HBeAg-positive phase, of wild-type precore in the HBeAg-negative phase, as well as of lamivudine-resistant strains in treatment-naïve patients. This demonstrates that viral strains occurring at low frequencies can act as reservoirs or memory genomes, which are selected and evolve in response to both intrinsic (host immune response) and extrinsic (drug administration) factors.
Transmission and tracing human migrations
Sequencing and bioinformatics have played an important role in demonstrating transmission routes for which previous evidence could only be anecdotal. For example, molecular characterization of HBV together with phylogenetic analysis was used to demonstrate inter-spousal transmission of HBV even after long marriages, in two Japanese patients who developed acute liver failure [54]. Similarly, the first known case of transfusion-transmitted HBV infection by blood screened using individual donor nucleic acid testing was confirmed by the 99.7% sequence homology between the complete genome sequences of the donor and the recipient HBV strains [26]. When migration events were estimated by ancestral state reconstruction using the criterion of parsimony, it was shown that Africa was the most probable source of dispersal of subgenotype A1 of HBV globally and that its dispersal to Asia and Latin America occurred as a result of the slave and trade routes [23,55].
Treatment response and resistance to treatment
According to international chronic hepatitis B treatment guidelines, the most desirable endpoint of treatment is HBsAg loss. Following HBsAg loss, patients have better clinical outcomes, including a decreased risk of developing cirrhosis, HCC, and death [56]. However, the currently available treatments, which include either nucleos(t)ide analogues (NAs) for direct inhibition of the viral polymerase or pegylated interferon (PegIFN) for immune-mediated HBV control, generally achieve HBV DNA suppression and HBeAg loss only, which are not enduring. In an attempt to identify viral factors associated with HBsAg loss, Charuworn et al. [57] demonstrated that viral diversity could differentiate those patients who would lose HBsAg when treated with tenofovir disoproxil fumarate. Lower diversity was seen in the protein-encoding regions of HBV from patients who lost HBsAg compared to those who did not. On the other hand, higher diversity in the regulatory elements of HBV was found to be a predictor of HBsAg loss [57]. These findings need to be confirmed by studies incorporating larger numbers of patients, as well as genotypes other than A and D. The high mutation rate of HBV means that it can evolve to develop resistance against NAs that target the viral DNA polymerase. Drug-resistant mutants develop under drug pressure in order for HBV to survive in the presence of the NA. The development of drug resistance mutations can be affected by HBV DNA levels at baseline, the rate of viral suppression, the length of NA treatment, and prior exposure to NA treatment [58]. Sequential treatment with different NAs, following drug failure, can lead to the development of multidrug resistance, which cannot be treated using currently available drugs [59]. The most frequent lamivudine drug resistance mutants are rtM204V/I, which are also selected by the L-pyrimidine analogues emtricitabine, clevudine, and telbivudine but are susceptible to the purine analogues adefovir and tenofovir [59]. rtA181V develops following lamivudine treatment but is sensitive to other NAs, whereas rtN236T is resistant to adefovir only. In deciding on treatment options, the detection of genotypic resistance, which is defined as the detection of viral mutations conferring drug resistance, is a priority in clinics. Direct and NGS of the polymerase region of the HBV genome can detect both well-defined and novel mutations. Bioinformatics tools and databases have been used to better understand HBV mutations and how they develop, especially in response to antiviral therapy and vaccination. Although laboratory methods have been used to study mutations, they are both labor intensive and expensive and limited in the degree of complexity they can investigate. As a more economical alternative, bioinformatics and computer simulation can use available biological data, such as the protein sequence and structural information, to investigate interactions by virus, host, and the environment [60].
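As a toy illustration of the genotypic-resistance screening described above, the following sketch (not a validated clinical tool) checks a translated HBV reverse-transcriptase sequence for the well-characterized substitutions named in the text; it assumes the input is already aligned to the standard rt-domain numbering, and the example sequence is fabricated for demonstration.

```python
# Minimal sketch, not a validated genotypic-resistance tool: it assumes the
# input is the translated HBV reverse-transcriptase (rt) domain, already
# aligned so that string position i corresponds to rt residue i+1 (standard
# rt numbering).  Positions and wild-type residues follow the mutations
# named in the text (rtM204V/I, rtA181V, rtN236T).
RESISTANCE_SITES = {
    204: ("M", {"V", "I"}),   # lamivudine / L-nucleoside resistance
    181: ("A", {"V"}),        # selected by lamivudine treatment
    236: ("N", {"T"}),        # adefovir resistance
}

def check_rt_resistance(rt_protein: str) -> dict:
    """Return the observed residue and a resistance flag for each monitored rt site."""
    report = {}
    for pos, (wild_type, resistant) in RESISTANCE_SITES.items():
        observed = rt_protein[pos - 1] if len(rt_protein) >= pos else "-"
        report[f"rt{wild_type}{pos}"] = {
            "observed": observed,
            "resistance_associated": observed in resistant,
        }
    return report

# Example with a fabricated sequence padded to a plausible rt length,
# carrying a valine at rt position 204.
example = "X" * 203 + "V" + "X" * 140
print(check_rt_resistance(example))
```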
Thus, Shen et al. [60] showed that most mutations develop in the hydrophobic regions of HBsAg and POL and that the amino acids that are more likely to be mutated are serine and threonine [60]. Understanding how amino acid mutations develop in HBV proteins can facilitate the rational design of both vaccines and drugs [60], for the prevention and treatment of HBV infection, respectively. The use of bioinformatics to compare viral and host genomic patterns, together with clinical information, with data from databases can lead to enhanced and individualized antiviral therapy.
Bioinformatics challenges of HBV
Despite its small genome size of ~3.2 kb, HBV presents several bioinformatic challenges:
1. The genome is circular, with position 1 conventionally taken to be the first "T" nucleotide in the EcoRI restriction site ("GAATTC"). Historically, position 1 was the start of the "Core" region, which is position 1901 in the current numbering system. Therefore, a number of sequences deposited earlier in the public databases are numbered using this outdated system and thus require processing before they can be used in alignments together with more recently submitted sequences.
2. Four overlapping reading frames are encoded in the circular genome, whereas nucleotides or amino acids are sequenced and processed linearly. Extracting nucleotide or amino acid sequences for the S and POL ORFs, which span the EcoRI site, from full-length or subgenomic fragments requires additional processing (see the sketch after this list).
3. The differences in genome lengths between the nine HBV genotypes (ranging from 3182 to 3248 base pairs in length) mean that direct comparison of loci between genotypes is not always possible using the current numbering system. These differences in genome lengths result in genotype alignments containing several regions of gaps, ranging from 3 to 33 nucleotides in length. A possible solution is the implementation of a standardized "universal numbering system" for all HBV genotypes, which we are currently developing.
4. Sequence variability is a feature of HBV. It is, therefore, essential to check all sequences carefully, to distinguish between artifacts and true variation (mutations). Variation within a population at a locus may result in two overlapping peaks on a chromatogram. Superinfections or co-infections with different strains may result in mixed populations, which appear as multiple or misaligned peaks on sequencing chromatograms. Disambiguating these is essential for robust downstream analyses.
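As a minimal illustration of the preprocessing required by challenge 2, the sketch below extracts an ORF that wraps around the EcoRI-defined origin of the circular genome; the coordinates and the mock sequence are illustrative placeholders rather than the true HBV ORF boundaries.

```python
# Minimal sketch of the preprocessing described in challenge 2: extracting an
# ORF that wraps around the conventional EcoRI origin of the circular HBV
# genome.  The start/end coordinates below are illustrative placeholders.

def extract_circular_orf(genome: str, start: int, end: int) -> str:
    """Return the ORF sequence using 1-based, inclusive coordinates.

    If end < start, the ORF spans the origin (position 1) of the circular
    genome, and the two fragments are concatenated.
    """
    if start <= end:
        return genome[start - 1:end]
    return genome[start - 1:] + genome[:end]

# Toy example on a 3,215 nt mock genome: an ORF running from position 2307 to
# position 1623 crosses the origin and is returned as one contiguous string.
mock_genome = "ACGT" * 803 + "ACG"                      # 3,215 nt placeholder
orf = extract_circular_orf(mock_genome, start=2307, end=1623)
print(len(orf))                                          # (3215 - 2307 + 1) + 1623 = 2532
```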
Public sequence databases
The first public sequence database, "GenBank," was established in 1982, having arisen from the earlier Los Alamos database, established in 1979 [61,62]. Since then, the number of nucleotides in GenBank has doubled approximately every 18 months [63]. The International Nucleotide Sequence Database Collaboration (INSDC) is a collection of three publicly available nucleotide (DNA or RNA) sequence databases, which synchronize data daily [64]. The collection consists of the DNA DataBank of Japan (DDBJ, located in Japan), the European Molecular Biology Laboratory (EMBL, located in the United Kingdom), and GenBank (located in the United States of America). The latest release of the database (release 211.0, from 15 December 2015 [65]) contains 189,232,925 loci and 203,939,111,071 bases, from 189,232,925 sequences, totaling approximately 742 gigabytes. In addition to the INSDC, many other databases exist, including genome databases, protein sequence, structure and interaction databases, microarray databases, and meta-databases. A list of biological databases on Wikipedia includes over 200 entries [66]. When searching for "hepatitis b virus" across all fields, the GenBank database [63], accessed on 27th January 2016, contained 105,745 sequences. When searching for "hepatitis b virus" in the "organism" field only, 84,119 sequences were found, with the oldest sequence submitted in the early 1980s. Refining this search to include only sequences of 200 nucleotides or longer, and excluding words such as "recombinant," "clone," and "patent," resulted in 68,762 sequences. When this same query was previously executed on 29 November 2015, 67,893 sequences were returned. Therefore, in the 59 days between the two queries, 869 new sequences (of at least 200 nucleotides in length, and not containing the words mentioned previously) were uploaded to GenBank. On average, this equates to almost 15 new HBV sequences added to GenBank per day. Making use of these sequences in downstream applications, such as multiple sequence alignments or phylogenetic analyses, is often challenging, as it is difficult to query for sufficient sequences of the correct genotype or subgenotype and covering the required genomic region. In order to overcome this limitation, we have developed a bioinformatics solution whereby all sequences matching a query are downloaded, curated, and aligned. The algorithm developed allows for the generation of a multiple sequence alignment for each genotype, which contains all the available sequences matching the query, in their correct position and orientation [67].
Table 2. List of the online tools developed and the workflow process at which each would be used. One of the entries, for GenBank submission, is PadSeq, which places two HBV sequence fragments on a backbone template. Table modified from Bell and Kramvis [68]; *, described for the first time here.
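For readers who wish to reproduce the kind of GenBank query described above programmatically, the following is a minimal sketch using Biopython's Entrez module; the query string is an approximation of the filters described in the text (organism field, minimum length of 200 nucleotides, exclusion of "recombinant", "clone", and "patent"), not the exact query used by the authors, and an e-mail address is required by the NCBI usage policy.

```python
from Bio import Entrez

# Minimal sketch, assuming Biopython is installed; the query string is an
# approximation of the filters described in the text, not the authors' exact query.
Entrez.email = "your.name@example.org"   # required by the NCBI usage policy

query = ('"hepatitis b virus"[Organism] AND 200:999999999[SLEN] '
         'NOT recombinant[All Fields] NOT clone[All Fields] NOT patent[All Fields]')

handle = Entrez.esearch(db="nucleotide", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print("Matching sequences:", record["Count"])
print("First accession UIDs:", record["IdList"][:5])
```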
A standard molecular biology laboratory workflow includes DNA extraction, polymerase chain reaction (PCR) amplification, direct DNA sequencing, viewing and checking of chromatograms, preparation of curated sequences, multiple sequence alignment, sequence analysis, serotyping, genotyping, phylogenetic analysis, and preparation of sequences for submission to the GenBank public sequence database [68]. Each of these steps presents data processing challenges, many of which have been addressed by the development of a suite of online tools (Table 2) [68]. These stand-alone, web-based tools can be accessed from any operating system platform and from any location with an internet connection. There is no requirement to install and learn new bioinformatics software, as these tools can be used when required. A system for processing ultra-deep pyrosequencing (amplicon resequencing) data has also been developed [51]. In addition, a number of HBV-specific websites and databases are currently available, a selection of which is presented in Table 3.
New bioinformatics tools for HBV
Here, we present two newly developed tools for the bioinformatic analysis of HBV.
Divergence calculator [http://hvdr.bioinf.wits.ac.za/divergence/]
One method of classifying HBV sequences into genotypes or subgenotypes is to examine the nucleotide sequence divergence between sequences. This divergence calculation is performed by totaling the number of nucleotides that differ between two aligned sequences and computing the percentage difference. The divergence calculator (Figure 3) performs various divergence calculations on groups of sequences from nucleotide or amino acid multiple sequence alignments in FASTA format. A minimum of one group containing two sequences, or two groups containing one sequence each, must be specified. As an example, consider an alignment of 10 genotype A sequences (group 1) and 10 genotype D sequences (group 2). Intra-group divergence, for each group, is calculated by comparing each sequence in group 1 with each other sequence in group 1 and then calculating the median, mean, and standard deviation of the divergences. This is then repeated for group 2. The inter-group divergence compares each sequence in group 1 with each sequence in group 2 and then calculates the median, mean, and standard deviation. If more than two groups are specified, the calculations iterate over all groups in turn. If the optional "query" group is specified, the tool compares each sequence in the query group with each sequence in the other group or groups, but outputs statistics for each sequence in the query group individually. This method would typically be used with a set of unknown query sequences and one or more groups of reference sequences. A comprehensive list of descriptive statistics is included on the output page for each analysis.
Random FASTA extraction and allocation (RAFAEL) [http://hvdr.bioinf.wits.ac.za/rafael/]
In some analyses, particularly when constructing phylogenetic trees, it may be desirable to extract one or more random subsets of sequences from a master or reference alignment. The "RAFAEL" tool was designed to perform this task. This tool takes an input file in FASTA format, which does not have to be aligned, and generates one or more subsets of the file, each containing a random selection of the specified number of sequences. The number of sequences may be specified as a count, or as a percentage of the number of sequences in the input file.
Within each subset, there are guaranteed to be no duplicate sequences. However, the same sequence may appear in more than one subset, as the subsets are not mutually exclusive.
Open-source software
In addition to biological databases, a large variety of biological analysis software, which is generally genome agnostic, is available. As with software in any field, the licensing terms and commercial costs of these packages vary widely. For example, packages that are free of cost are not necessarily open-source. The Free Software Foundation (FSF) [76,77] defines free software as software which "respects the users' freedom" in the sense that "users have the freedom to run, copy, distribute, study, change, and improve the software". As such, "free" is "a matter of liberty, not price". Free software, therefore, does not necessarily have to be made available at no cost or be a non-commercial project. Furthermore, software which is provided at no cost may not be "free" in the sense described above. The term "open-source" is often used when referring to "free" software. However, the two terms are not synonymous, although there is some overlap. Open-source software may, or may not, be free software, depending on the restrictions placed on users by the software. If the user is not free to distribute, change, and improve the software, even if it is open source, then it cannot be considered to be free software. Most software for which a license is purchased is neither free nor open source. The user does not have the freedom to distribute the software, or to use it on any computer chosen.
Recommended software
A list of recommended, freely available software is presented in Table 4.
Table 4. Bioinformatics software available free of charge for various computer operating system platforms. Legend: "GUI" = graphical user interface, "CL" = command line interface, "OSS" = open-source software, "Lin" = GNU/Linux, "Mac" = Apple Macintosh, "Win" = Microsoft Windows, "Emu" = emulator or virtual machine recommended by authors, "Com" = compilation from source code required.
Conclusion
The unique genome structure and molecular biology of HBV pose a number of challenges, and thus the development of bioinformatic tools has facilitated a more comprehensive and detailed analysis and understanding of the origin, evolution, transmission, and response to antiviral agents of HBV and of its interaction with the host. There is a wide range of free and commercially available tools, which have been developed for different applications. The availability and applications of high-throughput sequencing techniques and the advancement of "-omics" will continue to provide additional challenges, which will need to be addressed by further computational solutions.
Figure 1. The genome of hepatitis B virus (HBV). The partially double-stranded DNA (dsDNA) with the complete minus (−) strand and the incomplete plus (+) strand. The four open reading frames (ORFs) are shown: precore/core (preC/C), which encodes the e antigen (HBeAg) and core protein (HBcAg); P, for the polymerase (reverse transcriptase); preS1/preS2/S, for the surface proteins [the three forms of HBsAg: small (S), middle (M), and large (L)]; and X, for a transcriptional trans-activator protein. Figure 2. The structure of the hepatitis B virion.
Table 2 (continued). Additional tools and functions listed include: a chromatogram quality tool, which plots chromatogram quality scores; an automatic contig generator tool, which generates a contig from a forward and reverse chromatogram; an automatic alignment clean-up tool, which eliminates "gap-columns" and disambiguates ambiguous bases; Mind the gap, which splits a FASTA file based on a gap threshold per column; Babylon, which extracts HBV protein sequences (ORFs) and calculates 2 × 2 wild-type/mutant contingency tables; the divergence calculator*, which computes intra- and inter-group divergence with custom groups; Rafael*, which generates random subsets from an input FASTA file; a serotyping tool; and a tool that generates a phylogenetic tree.
Figure 3. The input screen of the divergence calculator, in which sequences are extracted and allocated to groups and other parameters are specified.
Table 3. Currently available HBV websites and databases.
6,638.8
2016-07-27T00:00:00.000
[ "Medicine", "Computer Science", "Biology" ]
Non-Hermitian Sensing in Photonics and Electronics: A Review
Recently, non-Hermitian Hamiltonians have gained a lot of interest, especially in optics and electronics. In particular, the existence of real eigenvalues of non-Hermitian systems has opened a wide set of possibilities, especially, but not only, for sensing applications exploiting the physics of exceptional points. In particular, the square root dependence of the eigenvalue splitting on different design parameters, exhibited by 2 × 2 non-Hermitian Hamiltonian matrices at the exceptional point, paved the way to the integration of high-performance sensors. The square root dependence of the eigenfrequencies on the design parameters is the reason for a theoretically infinite sensitivity in the proximity of the exceptional point. Recently, higher-order exceptional points have demonstrated the possibility of achieving an nth root dependence of the eigenfrequency splitting on perturbations. However, the exceptional sensitivity to external parameters is, at the same time, the major drawback of non-Hermitian configurations, leading to a high influence of noise. In this review, the basic principles of PT-symmetric and anti-PT-symmetric Hamiltonians will be shown, both in photonics and in electronics. The influence of noise on non-Hermitian configurations will be investigated and the newest solutions to overcome these problems will be illustrated. Finally, an overview of the newest outstanding results in sensing applications of non-Hermitian photonics and electronics will be provided.
Introduction
In quantum mechanics, the Hamiltonian Ĥ describing a closed quantum system is a Hermitian operator (Ĥ = Ĥ†) [1]; it has real eigenvalues and orthogonal eigenstates, providing a complete basis in Hilbert space. Hermiticity guarantees the conservation of probability in an isolated quantum system [1]. During the 20th century, non-Hermitian Hamiltonians (Ĥ ≠ Ĥ†) were introduced to describe open systems [2]. Non-Hermitian Hamiltonians generally exhibit complex eigenvalues, and their eigenstates can be non-orthogonal. Non-Hermitian degeneracies occur at an exceptional point (EP), where two or more eigenvalues and the corresponding eigenstates coalesce simultaneously. The widespread recent interest in non-Hermitian Hamiltonians takes its origin from the pioneering study by Bender et al. [3] in 1998. They demonstrated that a particular family of non-Hermitian Hamiltonians commuting with the joint operations of the parity operator (P) and time operator (T) ([Ĥ, PT] = 0) exhibits entirely real spectra under certain ranges of the design parameters, with non-orthogonal eigenstates. The properties of EPs of parity-time-symmetric Hamiltonians inspired a lot of work, both in fundamental and in applied research, dealing with several fields of science, including optics [4-8], acoustics [9-11], electronics [12,13], metamaterials [14-17], spintronics [18,19], and optomechanics [20].
The evolution of the state of a closed quantum system is governed by the Schrödinger equation,
iℏ d|ψ⟩/dt = Ĥ|ψ⟩, (1)
where |ψ⟩ is the state vector of the system. The Hamiltonian of a system is an operator corresponding to the total energy of that system. According to the Dirac formalism, the spectrum of the allowed energy levels of the system in stationary conditions is given by the set of eigenvalues {E}, solving the equation
Ĥ|ψ⟩ = E|ψ⟩. (2)
In quantum mechanics, the Hamiltonian Ĥ is assumed to be Hermitian, Ĥ = Ĥ† (the superscript † represents the Hermitian adjoint, i.e., transposition plus complex conjugation) [1].
The Hermiticity ensures that all the eigenvalues, E, are real and also guarantees a unitary time evolution [1]. There are several systems that can be described by a Schrödinger-like equation including also a source term:
i da/dt = Ĥa + D, (3)
where a is the amplitude vector and D a driving term. In [3], Bender et al. discovered that Hermiticity is not a necessary condition for Ĥ to have real eigenvalues. In particular, there exists a whole class of non-Hermitian Hamiltonians that shows real eigenvalues. This non-exclusive class has the property of being PT-symmetric.
Parity-Time Symmetry
A system is PT-symmetric provided that its Hamiltonian commutes with the PT operator ([PT, Ĥ] = 0), meaning that
PT Ĥ = Ĥ PT, (4)
and, consequently,
PT Ĥ (PT)⁻¹ = Ĥ. (5)
In other words, a Hamiltonian is PT-symmetric provided that it is invariant under the combined action of the P and T operations [3] (see Figure 1 for a graphical interpretation). In order to obtain the required condition for a Hamiltonian to be PT-symmetric, let us consider the parity operator, defined as
P: x → −x, (6)
and the time operator, defined as
T: t → −t. (7)
Let us now consider a simple 2 × 2 Hamiltonian, realized, for example, by coupling two generic resonators:
Ĥ = [ −ωc,1  κ12 ; κ21  −ωc,2 ], (8)
where ωc,1 and ωc,2 are the complex resonances of the two resonators (ωc,1(2) = ω1(2) + iγ1(2), with ω1(2) the real resonance frequency and γ1(2) the decay rate in the resonator), and κ12 and κ21 are the coupling strengths between the two resonators. In this context, the driving term D in (3) can be expressed as D = (µ1 s_in1, µ2 s_in2)ᵀ, where s_in1(2) is the input signal coupled to the first (second) resonator and µ1(2) a coupling coefficient between s_in1(2) and the amplitude, a1(2), in the first (second) resonator. In (8), it has been implicitly considered that the time dependence is exp(iωt). This choice differs from the one adopted by the majority of papers in physics, but it is useful for being consistent with the classical conventions adopted in optics and electronics (phasor notation) in the next sections.
Considering the matrix expression of the Hamiltonian in (8), the parity operator is the Pauli operator
P = σx = [ 0  1 ; 1  0 ],
and the T operator acts on the operators as complex conjugation, T Ĥ (T)⁻¹ = Ĥ*. So, to verify the condition necessary for the PT symmetry:
PT Ĥ (PT)⁻¹ = σx Ĥ* σx = [ −ω*c,2  κ*21 ; κ*12  −ω*c,1 ].
It is immediately seen that the required conditions for a 2 × 2 Hamiltonian to be PT-symmetric (see Equation (5)) are:
ωc,1 = ω*c,2 and κ12 = κ*21.
For reciprocal coupling mechanisms (κ12 = κ21 = κ), the second requirement is equivalent to having κ be a real number. So, the PT-symmetric Hamiltonian is found to be
ĤPT = [ −(ω0 + iγ)  κ ; κ  −(ω0 − iγ) ],
where ω0, γ and κ are real values. For two coupled resonators, ω0 is the same resonant frequency for the two resonators, κ is the coupling strength between the resonators, and γ is the loss term (−γ can be seen as the linear gain; in our model we will neglect the effect of gain saturation). The set of eigenvalues {−ωPT} of the Hamiltonian is easily found by setting det(ĤPT + ωPT·I) = 0. So, we obtain
ωPT = ω0 ± √(κ² − γ²).
The two eigenfrequencies found can be designed to coalesce, for |κ| = |γ|, in ωPT = ω0. This design condition is called the "exceptional point" (EP). The exceptionality of this design condition arises from the fact that, as soon as a perturbation, ε, is applied to any of the parameters of the system (resonances, gain or loss of one resonator, coupling strength), the two eigenfrequencies split according to a square root function of the perturbation. In particular, there are three possible kinds of perturbation that can be interesting for sensing purposes, examined in the following subsections: a perturbation of one of the resonances, of the loss (or gain) of one resonator, or of the coupling strength. For |κ| > |γ|, the PT symmetry is called "unbroken" (exhibiting real eigenvalues), whereas, for |κ| < |γ|, the PT symmetry is called "broken" (exhibiting complex conjugate eigenvalues). In the unbroken PT symmetry, bifurcating eigenmodes appear. The eigenmodes oscillate and do not grow or decay. Instead, in broken PT symmetry, the system is not in equilibrium; one eigenmode grows in time and the other decays in time.
Perturbing Resonances in PT Symmetry
When a perturbation εω is applied to one of the resonances of a PT-symmetric Hamiltonian (in the following, we will consider the first of the two resonances being perturbed), the new Hamiltonian becomes
Ĥ = [ −(ω0 + εω + iγ)  κ ; κ  −(ω0 − iγ) ].
The eigenfrequencies become
ωPT = ω0 + εω/2 ± √(κ² − γ² + εω²/4 + iεωγ).
With a design at the EP (|κ| = |γ|) and with εω << |κ|:
ωPT ≈ ω0 ± √(iεωγ).
The result is that the splitting between the eigenfrequencies is proportional to the square root of the perturbation. The sensitivity of the eigenfrequency splitting to the perturbation εω at the EP is proportional to the inverse of the square root of the perturbation (εω^(−1/2)), thus leading to an infinite sensitivity for εω tending to zero.
Perturbing Loss (Gain) in PT Symmetry
When a perturbation εγ is applied to the loss (gain) of one of the resonators (in the following, we will consider the first of the two resonators being perturbed), the new Hamiltonian becomes
Ĥ = [ −(ω0 + i(γ + εγ))  κ ; κ  −(ω0 − iγ) ].
The eigenfrequencies become
ωPT = ω0 + iεγ/2 ± √(κ² − γ² − γεγ − εγ²/4).
With a design at the EP (|κ| = |γ|) and with εγ << |κ|:
ωPT ≈ ω0 ± i√(γεγ).
The result is that the splitting between the eigenfrequencies is proportional to the square root of the perturbation and the sensitivity of the eigenfrequency splitting to the perturbation is infinite for εγ tending to zero. Figure 2 shows the real part (Figure 2a) and the imaginary part (Figure 2b) of the eigenfrequencies of a PT-symmetric Hamiltonian for different values of the perturbations εω and εγ in the proximity of an EP. Black lines identify the region where εω = 0: it is possible to appreciate the square root dependence on εγ in the proximity of the EP.
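To make the square-root response concrete, the following is a minimal numerical sketch (with arbitrary parameter values, not taken from any cited design) that builds the 2 × 2 PT-symmetric Hamiltonian introduced above, detunes one resonance by εω, and checks that the eigenfrequency splitting at the EP scales as the square root of the perturbation.

```python
import numpy as np

# Minimal numerical sketch (arbitrary parameter values): build the 2x2
# PT-symmetric Hamiltonian at the exceptional point (|kappa| = |gamma|),
# detune the first resonance by eps, and check that the eigenfrequency
# splitting grows as the square root of eps.
omega0, gamma = 1.0, 0.05
kappa = gamma                      # EP condition

def eigenfrequencies(eps_omega):
    H = np.array([[-(omega0 + eps_omega + 1j * gamma), kappa],
                  [kappa, -(omega0 - 1j * gamma)]])
    # The eigenvalues of H are {-omega}, so the eigenfrequencies are -eigvals.
    return -np.linalg.eigvals(H)

for eps in [1e-6, 1e-4, 1e-2]:
    w = eigenfrequencies(eps)
    splitting = abs(w[0] - w[1])
    print(f"eps = {eps:.0e}  splitting = {splitting:.3e}  "
          f"splitting/sqrt(eps) = {splitting / np.sqrt(eps):.3f}")
# The ratio splitting/sqrt(eps) stays roughly constant, illustrating the
# square-root response near the EP.
```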
Perturbing Coupling Strength in PT Symmetry
When a perturbation εκ is applied to the coupling mechanism between the resonators (the coupling is supposed to be reciprocal), the new Hamiltonian becomes
Ĥ = [ −(ω0 + iγ)  κ + εκ ; κ + εκ  −(ω0 − iγ) ].
The eigenfrequencies become
ωPT = ω0 ± √((κ + εκ)² − γ²).
With a design at the EP (|κ| = |γ|) and with εκ << |κ|:
ωPT ≈ ω0 ± √(2κεκ).
The result is that the splitting between the eigenfrequencies is proportional to the square root of the perturbation and the sensitivity is proportional to the inverse of the square root of the perturbation.
Anti-Parity-Time Symmetry
A system is anti-PT-symmetric provided that its Hamiltonian satisfies the anticommutation relation with the PT operator ({PT, Ĥ} = 0), meaning that
PT Ĥ = −Ĥ PT, and, consequently, PT Ĥ (PT)⁻¹ = −Ĥ.
In other words, under the combined action of the P and T operations [27], the obtained Hamiltonian is the opposite of the starting one (see Figure 3 for a graphical interpretation). To find the conditions required for a Hamiltonian to be anti-PT-symmetric, let us apply the definition:
PT Ĥ (PT)⁻¹ = σx Ĥ* σx = [ −ω*c,2  κ*21 ; κ*12  −ω*c,1 ] = −Ĥ.
It is immediately seen that, in order to be anti-PT-symmetric (see Equation (26)), a Hamiltonian requires:
ωc,1 = −ω*c,2 and κ12 = −κ*21.
For reciprocal coupling mechanisms (κ12 = κ21), the second requirement is equivalent to having an imaginary coupling strength. So, the anti-PT-symmetric Hamiltonian is found to be
ĤAPT = [ −(Δ + iγ)  iκ ; iκ  Δ − iγ ],
where Δ, γ, and κ are real values. However, this condition is not realistically satisfiable by two coupled resonators, because it would require having at least one negative resonance frequency (without a physical meaning).
Nonetheless, a Hamiltonian, ĤQAPT, describing two coupled resonators with different resonances (ω1 and ω2), the same loss (or gain), and imaginary coupling strength, can be rewritten in the form of an anti-PT-symmetric Hamiltonian after transforming the equation of motion to the frame rotating with the carrier frequency ω0 (with ω0 = (ω1 + ω2)/2). Starting from the Schrödinger-like equation
i d/dt (A1, A2)ᵀ = ĤQAPT (A1, A2)ᵀ,
where (A1, A2)ᵀ is the amplitude vector, and applying the transformation to the rotating frame, we obtain
i d/dt (A′1, A′2)ᵀ = ĤAPT (A′1, A′2)ᵀ,
with (A′1, A′2)ᵀ the state vector in the rotating frame and Δ = (ω1 − ω2)/2. Since rotating the frequency frame of reference does not affect the properties of the eigenfrequencies, we can continue to refer to ĤQAPT as an anti-PT-symmetric Hamiltonian. This is the reason why the configuration in Figure 3 is referred to as anti-PT-symmetric. The sets of eigenvalues of ĤQAPT and ĤAPT will only differ by ω0. Referring to ĤQAPT, the set of the eigenvalues {−ωAPT} is easily found by setting det(ĤQAPT + ωAPT·I) = 0. So, we obtain
ωAPT = ω0 + iγ ± √(Δ² − κ²).
The two eigenfrequencies found can be designed to coalesce, for |κ| = |Δ|, in ωAPT = ω0 + iγ. This design condition corresponds to the EP. The exceptionality of this design condition arises from the fact that, as soon as a perturbation, ε, is applied to any of the parameters of the involved system (isolated resonances, gain or loss of one resonator, coupling strength), the two eigenfrequencies split according to a square root function of the perturbation. For |Δ| < |κ|, the anti-PT symmetry is called "unbroken", whereas, for |Δ| > |κ|, the anti-PT symmetry is called "broken". In the unbroken anti-PT symmetry, the eigenmodes have the same resonance frequency but different linewidths. Instead, in broken anti-PT symmetry, bifurcating eigenmodes appear (with distinguishable resonant peaks).
Perturbing Resonances in Anti-PT Symmetry
When a perturbation εω is applied to one of the resonances of an anti-PT-symmetric Hamiltonian (in the following, we will consider the first of the two resonances being perturbed), the new Hamiltonian becomes
ĤQAPT = [ −(ω1 + εω + iγ)  iκ ; iκ  −(ω2 + iγ) ].
The eigenfrequencies become
ωAPT = ω0 + εω/2 + iγ ± √((Δ + εω/2)² − κ²).
Without loss of generality, in the simplifying hypothesis of ω1 > ω2, with a design at the EP (|κ| = |Δ|), and with εω << |κ|:
ωAPT ≈ ω0 + iγ ± √(Δεω).
The result is that the splitting between the eigenfrequencies is proportional to the square root of the perturbation.
Perturbing Loss (Gain) in Anti-PT Symmetry
When a perturbation εγ is applied to the loss (gain) of one of the resonators (in the following, we will consider the first of the two resonators being perturbed), the new Hamiltonian becomes
ĤQAPT = [ −(ω1 + i(γ + εγ))  iκ ; iκ  −(ω2 + iγ) ].
The eigenfrequencies become
ωAPT = ω0 + i(γ + εγ/2) ± √((Δ + iεγ/2)² − κ²).
Without loss of generality, in the simplifying hypothesis of ω1 > ω2, with a design at the EP (|κ| = |Δ|), and with εγ << |κ|:
ωAPT ≈ ω0 + iγ ± √(iΔεγ).
The result is that the splitting between the eigenfrequencies is proportional to the square root of the perturbation. Figure 4 shows the real part (Figure 4a) and the imaginary part (Figure 4b) of the eigenfrequencies of an anti-PT-symmetric Hamiltonian for different values of the perturbations εω and εγ in the proximity of an EP. Black lines identify the region where εγ = 0: it is possible to appreciate the square root dependence on εω in the proximity of the EP.
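The same kind of numerical check can be applied to the anti-PT-symmetric case discussed above; the short sketch below (again with arbitrary, illustrative parameter values) fits the log-log slope of the splitting versus the resonance perturbation and should return an exponent close to 0.5 near the EP.

```python
import numpy as np

# Minimal numerical check (arbitrary parameter values) of the square-root
# scaling of the anti-PT eigenfrequency splitting when one resonance is
# perturbed.  The fitted log-log slope should be close to 0.5 near the EP.
omega1, omega2, gamma = 1.10, 0.90, 0.02
kappa = (omega1 - omega2) / 2          # EP condition: |kappa| = |Delta|

def splitting(eps_omega):
    H = np.array([[-(omega1 + eps_omega + 1j * gamma), 1j * kappa],
                  [1j * kappa, -(omega2 + 1j * gamma)]])
    w = -np.linalg.eigvals(H)          # eigenfrequencies are {-eigenvalues}
    return abs(w[0] - w[1])

eps = np.logspace(-8, -4, 5)
slope = np.polyfit(np.log(eps), np.log([splitting(e) for e in eps]), 1)[0]
print(f"fitted scaling exponent: {slope:.3f}")   # ~0.5 expected
```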
Perturbing Coupling Mechanism in Anti-PT Symmetry
When a perturbation, εκ, is applied to the coupling mechanism between the resonators (the coupling mechanism is supposed to be reciprocal), the new Hamiltonian becomes
ĤQAPT = [ −(ω1 + iγ)  i(κ + εκ) ; i(κ + εκ)  −(ω2 + iγ) ].
The eigenfrequencies become
ωAPT = ω0 + iγ ± √(Δ² − (κ + εκ)²).
So, we obtain, with a design at the EP (|κ| = |Δ|), and with εκ << |κ|:
ωAPT ≈ ω0 + iγ ± i√(2κεκ).
The result is that the splitting between the eigenfrequencies is proportional to the square root of the perturbation.
Stability and Noise in Non-Hermitian Hamiltonians
By definition, the PT-symmetric Hamiltonian is designed to work with the eigenfrequencies at the limit of stability. In fact, the time behaviour of the eigenmodes in the cavities can be easily obtained by using the found eigenfrequencies. An eigenfrequency ωA corresponds to an eigenmode EA such that [53]
EA(t) ∝ exp(iωA t).
The immediate consequence is that a negative imaginary part of an eigenfrequency leads to a divergent eigenmode. So, a PT-symmetric system at the EP, by its definition, is at the limit of stability (marginally stable), because the coalesced eigenfrequencies in the unperturbed condition lie on the real axis. Any source of noise would make the system exit the stability condition, leading to the presence of divergent eigenmodes, and lasing would arise. Instead, by its definition, anti-PT symmetry at the EP can be stable, provided that γ > 0 (system not lasing). Figure 5, in the left column, shows the trajectories of the eigenfrequencies on the Gauss plane (Re{ω}, Im{ω}) for PT-symmetric and anti-PT-symmetric configurations in the presence of a perturbation of the resonance or of the gain (or loss) of one of the resonators. The grey half-plane represents the unstable region, i.e., the region where eigenfrequencies should not lie in order to prevent instability. As soon as one eigenfrequency lies in the unstable half-plane, the system becomes unstable, due to the presence of at least one divergent mode. The right column of Figure 5 shows the normalized energy in one of the resonators (proportional to a measurable output of the system) as a function of the normalized angular frequency and for the same values of the perturbations used in the corresponding graph in the left column. To obtain the normalized graphs in the right column of Figure 5, input amplitudes s_in1 = 1 and s_in2 = 0 have been considered. Since the imaginary part of an eigenfrequency is proportional to the linewidth of the corresponding eigenmode, the fact that the eigenfrequencies of the PT-symmetric case lie on the real axis of the Gauss plane is an advantage for the resolution of the sensor. However, as seen, the proximity of the eigenfrequencies to the half-plane with Im{ω} < 0 leads to instability.
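As a rough illustration of the stability argument above, the following Monte Carlo sketch (the noise amplitudes are arbitrary assumptions) adds zero-mean fluctuations of the resonance detuning and of the gain/loss balance to a PT-symmetric Hamiltonian held at its EP and counts how often an eigenfrequency acquires a negative imaginary part, i.e. enters the unstable half-plane.

```python
import numpy as np

# Minimal Monte Carlo sketch (arbitrary noise amplitudes): perturb a
# PT-symmetric Hamiltonian held at its EP with zero-mean fluctuations of the
# resonance detuning and of the gain/loss balance, and count how often one
# eigenfrequency crosses into Im{omega} < 0, i.e. the unstable half-plane
# discussed above.
rng = np.random.default_rng(0)
omega0, gamma = 1.0, 0.05
kappa = gamma                                    # EP condition
sigma_detuning, sigma_gain = 1e-3, 1e-3          # assumed noise strengths

unstable = 0
trials = 10_000
for _ in range(trials):
    d = rng.normal(0.0, sigma_detuning)          # resonance detuning noise
    g = rng.normal(0.0, sigma_gain)              # gain/loss imbalance noise
    H = np.array([[-(omega0 + d + 1j * (gamma + g)), kappa],
                  [kappa, -(omega0 - 1j * gamma)]])
    w = -np.linalg.eigvals(H)                    # eigenfrequencies
    if np.min(w.imag) < 0:
        unstable += 1

print(f"fraction of noise realizations with a divergent mode: {unstable / trials:.2%}")
```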
To overcome the problem of stability of PT symmetry, the concept of quasi-PT symmetry can be introduced. Quasi-PT symmetry can also be useful for implementing an EP when introducing gain is not possible.
Quasi-PT Symmetry
Sometimes PT symmetry can be difficult to achieve because of the necessity of implementing gain. Moreover, it shows some problems of instability due to the presence of eigenfrequencies on the real axis of the complex plane. Several works [54,55] solved this issue by applying differential losses to the optical modes (A1 and A2) involved in the Hamiltonian. Let us consider the Schrödinger-like equation:
i d/dt (A1, A2)ᵀ = [ −(ω0 + iγ1)  κ ; κ  −(ω0 + iγ2) ] (A1, A2)ᵀ.
We can use a variable transformation to introduce two auxiliary modes that experience a common gain with respect to A1 and A2:
A′1,2 = A1,2 exp(χt),
where χ = (γ1 + γ2)/2. In this way, a new Hamiltonian can be introduced, such that
i d/dt (A′1, A′2)ᵀ = [ −(ω0 + iξ)  κ ; κ  −(ω0 − iξ) ] (A′1, A′2)ᵀ,
where ξ = (γ1 − γ2)/2. In this way, the PT symmetry is verified. An EP still exists. This common practice, however, is different from the similar variable change applied in the anti-PT-symmetric case and has some drawbacks in the measurable outputs. In fact, a common loss can spoil the resolvability of the resonances, due to the broadening of the linewidths (see [56]).
Classical Noise in Non-Hermitian Hamiltonians
The incredibly enhanced sensitivity to target parameters of non-Hermitian Hamiltonians has its immediate drawback in the enhanced sensitivity also to unwanted perturbations and noise. The influence of classical noise has been investigated in [21]. Including noise in the total non-Hermitian Hamiltonian of a system, i.e., considering a noisy EP, it is possible to obtain
Ĥ = ĤEP + Σj=1..K ξj Ĥnoise,j,
where K is the number of statistically independent noise sources and the ξj are real-valued fluctuations with a zero mean, whereas the Ĥnoise,j describe the fluctuations of the matrix elements of the total Hamiltonian. Several papers have studied the influence of noise on the EP. In [57], the authors investigated the influence of mesoscopic fluctuations and noise on the spectral and temporal properties of systems of PT-symmetric coupled gain-loss resonators at the EP. By considering an inevitable detuning of the resonance frequencies of the isolated resonators, the authors found that statistical averaging significantly smears the spectral features, thus limiting the sensitivity of EP-based sensors. Moreover, they showed that temporal fluctuations in the resonance detuning and gain lead to a quadratic growth of the optical power in time, meaning dynamical instability. The numerical simulations in [58] showed an exponential divergence of the eigenstates due to the presence of noise. The authors state that maintaining operation at the EP for enough time to detect resonance splitting requires a very careful design of a feedback system. Nonlinearities that could prevent divergence are not included in the modelling. In practice, for EP-based sensors operating close to the real frequency axis (as happens in the PT-symmetric case), even small noise can lead to instability, thus spoiling the resolvability. This does not mean that EP-based sensors are fundamentally limited by classical noise [21]. The instability can be removed, for example, by uniform damping of the sensor, thus realizing a quasi-PT-symmetric version of the sensor. However, as said, this would broaden the linewidths [56], thus reducing the resolution.
Quantum Noise
Different from classical noise, quantum noise may fundamentally limit EP-based sensing [59-62].
The theoretical approaches used to analyse quantum noise have been developed under the hypothesis of a weak dispersive limit, where the frequency splitting is much lower than the linewidths [21]; however, most of the experimental works have been performed away from this condition. The issue raised by the quantum-noise studies of non-Hermitian systems is that the frequency splitting is not a direct measurement but is derived from measurements of the fields. In [59], it has been demonstrated that the changes in the fields are, to lowest order, proportional to the perturbation, both at diabolical points [63] (where the splitting of the resonances is proportional to the perturbation) and at EPs. This implies an equal scaling of the quantum-limited signal-to-noise ratio for EPs and diabolical points. The same conclusion has been reached in [60], where it is demonstrated that an upper bound on the signal-to-noise ratio is the same whether or not the sensor is at an EP. The bound obtained in [60], however, is limited to reciprocal sensors; non-reciprocity is regarded as a resource that can be exploited for higher-performance sensing (see Section 8). In [61], the signal-to-noise ratio bound in the quantum regime has been demonstrated to be better for an EP-based sensor near its lasing threshold, using a linearization approach (which could represent a limit of that analysis). In [62], the authors observed that the diverging frequency-splitting enhancement of a Brillouin-based optical gyroscope at the EP is exactly compensated by a diverging broadening of the laser linewidths, due to the non-orthogonality of the counterpropagating modes. The factor describing this linewidth broadening is called the Petermann factor and is due to the coalescence of the eigenmodes at the EP. The influence of noise at the EP is still an open issue, and a lot of research has been devoted to sensors able to avoid its destructive effect (see Sections 8, 11 and 12).

Limit of Detection

The strong spectral response of PT- and anti-PT-symmetric Hamiltonians is expressed in the proximity of EPs; far from these operating conditions, the systems behave as at diabolic points. Since it is crucial to operate in the proximity of the EP, the radicands in Equations (16) and (35) must vanish, i.e., the system must sit exactly at the EP. In order to detect a perturbation ε (applied, for example, to the loss, to the gain, or to the resonance) in the proximity of an EP, the splitting induced by ε must exceed the residual splitting caused by any deviation from the EP. So, the limit of detection of the sensor is set by the ability to keep the system at the EP, with the aid of a feedback loop.

Exceptional Surface

As demonstrated, in a PT-symmetric implementation the resonant frequencies of the two resonators need to be identical, the gain and loss need to be perfectly balanced, and the coupling strength between the resonators needs to perfectly match the difference between gain and loss. In an anti-PT-symmetric device, instead, the gains or losses of the two resonators need to be the same and the indirect coupling strength needs to match the difference between the isolated resonances. Several researchers use feedback techniques to tune the system actively and continuously, keeping it in the proximity of the EP (using micro-heaters, tuneable coupling methods, etc.). However, in this way the resolution of the sensor becomes limited by the resolution of the transducer used for the active control.
Moreover, for practical sensing applications it would be extremely useful to have a design that decouples unwanted fabrication errors and experimental uncertainties from the target perturbations caused by the sensing mechanism. To this end, in [64], Zhong et al. proposed the idea of a hypersurface of EPs, called an exceptional surface (ES). The idea is to make the EP condition insensitive to perturbations that are not related to the sensing principle. This is achieved by coupling two counterpropagating optical modes inside the same cavity (as in Figure 6), rather than using two separate coupled cavities. The architecture in Figure 6 can be described with the effective Hamiltonian H_ES, where a_cw and a_ccw are the field amplitudes of the clockwise (CW) and counterclockwise (CCW) modes, ω0 is the optical resonance frequency of the cavity (the same for the CW and CCW modes), γ is the common loss per unit time, and κ1 (κ2) is the coupling strength from the CCW to the CW mode (from the CW to the CCW mode). The eigenfrequencies of the system are easily found in the harmonic regime. The main result is that the EP is achieved when one of the two coupling strengths, κ1 or κ2, is equal to zero. In this case, any perturbation to the resonance ω0 or to the loss γ does not perturb the EP: there is a whole hypersurface of EPs, which is called an exceptional surface (ES). For κ1 equal to zero, the eigenfrequency difference depends on the square root of κ2, which represents an important advantage for sensing. Figure 6 illustrates the sensing principle schematically. However, the proposed architecture is not as versatile as the parity-time- and anti-parity-time-symmetric systems presented before: this kind of configuration is useful only when the perturbation of the coupling strength is the target of the sensing.

PT-Symmetric Optical Potential

Non-Hermitian Hamiltonians have been studied and developed especially in optics. The first demonstration of the possibility of realizing PT symmetry in optics was given in [24]. A parallelism between the potential in the Schrödinger equation and the refractive index then made it possible to conceptualize a new variety of optical PT-symmetric Hamiltonians. The paraxial equation of diffraction in optics is [7]

i ∂E(x,z)/∂z + (1/(2k0)) ∂²E(x,z)/∂x² + (k0/n0)[n_r(x) + i n_i(x)] E(x,z) = 0,

where E(x,z) is the electric field envelope, n_r(x) and n_i(x) are the real and the imaginary part of the refractive index distribution, respectively, and k0 = 2π n0/λ, with λ the wavelength of the field in vacuum and n0 the substrate index.
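The robustness of the exceptional surface can be checked numerically with a minimal sketch. The matrix below is one possible form of H_ES consistent with the description above (loss written as +iγ under the exp(iωt) convention, κ1 set to zero so that the cavity sits on the ES); the exact form and sign conventions used in [64] may differ, and all numbers are illustrative.

```python
import numpy as np

def eigensplitting(omega0, gamma, kappa1, kappa2):
    """Eigenfrequency splitting of the CW/CCW two-mode Hamiltonian."""
    H = np.array([[omega0 + 1j * gamma, kappa1],
                  [kappa2, omega0 + 1j * gamma]])
    w1, w2 = np.linalg.eigvals(H)
    return abs(w1 - w2)

omega0, gamma, kappa2 = 1.0, 1e-3, 1e-4   # illustrative values only

# On the exceptional surface (kappa1 = 0): perturbing the resonance or the loss
# leaves the two modes degenerate, i.e. the EP condition is preserved.
print(eigensplitting(omega0 + 1e-5, gamma, 0.0, kappa2))   # ~ 0
print(eigensplitting(omega0, 1.5 * gamma, 0.0, kappa2))    # ~ 0

# A target perturbation that introduces back-coupling kappa1 = eps produces a
# splitting ~ 2*sqrt(eps*kappa2), i.e. the square-root-enhanced response.
for eps in (1e-10, 1e-8, 1e-6):
    print(f"eps = {eps:.0e}  splitting = {eigensplitting(omega0, gamma, eps, kappa2):.3e}")
```

Only the perturbation that mimics the sensing mechanism lifts the degeneracy, which is the behaviour that motivates the exceptional-surface design.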
This equation is widely known to be mathematically isomorphic to the Schrödinger equation, which can be written in Hamiltonian form with p̂ the momentum operator. In order to be PT-symmetric, the Hamiltonian needs to be invariant under the combined action of the parity operator (P performs x → −x, p̂ → −p̂) and the time-reversal operator. The isomorphism between the paraxial equation of diffraction in optics (Equation (54)) and the Schrödinger equation (Equation (55)), with z playing the role of time and n(x) playing the role of the optical potential, implies that the optical potential should have an even real part (n_r(x) = n_r(−x)) and an odd imaginary part (n_i(x) = −n_i(−x)).

Non-Hermitian Hamiltonians with Optical Waveguides

The isomorphism between the paraxial equation of diffraction in optics and the Schrödinger equation suggests that, in order to have a parity-time-symmetric Hamiltonian, it is sufficient to have the real part of the refractive index as an even function and its imaginary part as an odd function. According to this result, two parallel optical waveguides with the same real part of the refractive index and with opposite imaginary parts realize a PT-symmetric Hamiltonian. An easy way to verify this is to use coupled-mode theory. In the weak-coupling approximation (and neglecting self-coupling), and denoting with b1 and b2 the mode amplitudes in two adjacent waveguides, the coupled-mode equations [70] (with the implicit time dependence exp(iωt)) involve β1 and β2, the propagation constants of the modes b1 and b2, respectively, κ12 and κ21, the coupling coefficients between the two waveguides, and z, the propagation direction. Equations (59) and (60) can be rewritten in matrix form, and the system can be made PT-symmetric by using complex propagation constants (accounting for the effect of gain and loss). In particular, Figure 7a shows that it is possible to set up PT-symmetric waveguides by using two directly coupled optical waveguides, one with gain and the other one lossy.

In [27], it has been demonstrated that an effective anti-PT-symmetric Hamiltonian can also be obtained by coupling the two waveguides through a central auxiliary dissipative waveguide (Figure 7b), which provides an imaginary coupling strength. Writing the coupled-mode theory for the three coupled waveguides, with c the mode amplitude in the central waveguide, γ the loss rate of the central waveguide, and κ1 and κ2 the coupling strengths between the external waveguides and the central one, the authors showed that, in the hypothesis of κ1 ≈ κ2 and γ >> |κ|, mode c can be adiabatically eliminated and an effective anti-PT-symmetric Hamiltonian is obtained, where β = (β1 + β2)/2, Δ = (β1 − β2)/2, and Γ = |κ|²/γ. Recently, an experimental demonstration of anti-PT-symmetric optical waveguides has been reported in [71].

Effective Non-Hermitian Hamiltonians with Optical Resonators

We have shown that the refractive index acts as an optical potential: having the real part of the refractive index distribution as an even function and the imaginary part as an odd function gives rise to a PT-symmetric Hamiltonian. A simple way to study PT-symmetric optical resonators is the time-domain coupled-mode theory proposed in [72], a useful formalism, typical of electronic circuits, to describe the energy exchanges between the optical resonators.
Two evanescently coupled optical resonators can be described in the time domain as in [72], where a1(2) represents the energy amplitude in the first (second) cavity, normalized so that |a1(2)|² is the total energy stored in the first (second) resonator, ω1(2) and γ1(2) are the resonance angular frequency and the photon decay rate, respectively, of the first (second) resonator, and k is the coupling strength between the resonators. These equations can be particularized in two special cases:
• ω1 = ω2 = ω0, γ1 = −γ2 = γ, with k a real value (k = κ);
• ω1 ≠ ω2, γ1 = γ2, with k an imaginary value (k = iκ).
The first case corresponds to an effective PT-symmetric Hamiltonian, whereas the second case becomes anti-PT-symmetric. The PT-symmetric configuration can be easily realized by directly coupling two optical resonators (Figure 8a), whereas the anti-PT configurations can be realized by indirect dissipative coupling (with the same adiabatic approximation used for the anti-PT-symmetric waveguides), as shown in Figure 8b,c.

Non-Hermitian Sensing on Photonic Integrated Chips

The high sensitivity exhibited by EPs makes it possible to realize high-performance miniaturized sensors; this is the reason for the high interest in non-Hermitian photonics, especially on photonic integrated chips (PICs).
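Before moving to integrated implementations, the two special cases listed above can be checked with a short numerical sketch (illustrative parameter values; gain and loss written as ∓iγ on the diagonal under the exp(iωt) convention): below the EP, the PT case yields two real eigenfrequencies, while the anti-PT case (for |detuning| < κ) yields a common real frequency with two different decay rates.

```python
import numpy as np

omega0, kappa = 1.0, 1e-3     # illustrative values only

# PT-symmetric case: identical resonances, balanced gain (-i*gamma) and loss (+i*gamma),
# real coupling kappa.  Unbroken phase (gamma < kappa): two real eigenfrequencies.
gamma = 0.4e-3
H_pt = np.array([[omega0 - 1j * gamma, kappa],
                 [kappa, omega0 + 1j * gamma]])
print("PT eigenfrequencies:     ", np.round(np.linalg.eigvals(H_pt), 6))

# Anti-PT case: detuned resonances, equal loss, imaginary (dissipative) coupling i*kappa.
# For |detuning| < kappa the real parts coalesce and only the decay rates differ.
delta, gamma_c = 0.4e-3, 1e-3
H_apt = np.array([[omega0 + delta + 1j * gamma_c, 1j * kappa],
                  [1j * kappa, omega0 - delta + 1j * gamma_c]])
print("anti-PT eigenfrequencies:", np.round(np.linalg.eigvals(H_apt), 6))
```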
There are some works on non-Hermitian Hamiltonians in free-space optics [73] or with fibre optics [74,75], but the interest in sensitivity enhancement is mainly oriented towards the on-chip integration of the sensors. Both the non-resonant parallel-waveguide configuration and the ring-resonator one can be easily integrated in a PIC to realize a miniaturized sensor for different applications. In the literature on sensing applications, the non-Hermitian configurations realized with waveguided optical resonators (Figure 8) are preferred to the non-resonant waveguide-based ones (Figure 7). The eigenfrequencies of a non-Hermitian photonic system based on resonant cavities can be evaluated by simply measuring the frequencies of the peaks in the transfer function (see Figure 5). Experimentally, the transfer function versus wavelength can be obtained in three different ways. The first solution requires a broadband source (either integrated or external) and a highly selective spectrometer (either integrated or external) to reconstruct the output spectrum. In the second, a tuneable narrow-linewidth laser source (integrated or external) is used to scan the spectrum and a photodetector is used to collect the optical power and reconstruct the transfer function versus wavelength. With the third method, a broadband source is used to excite both resonant peaks corresponding to the eigenfrequencies of the non-Hermitian system (in the case of a real splitting between the eigenfrequencies); a single photodetector at the output then reads the beating between the resonance peaks, and the Fourier transform of the photogenerated current shows a peak at a frequency equal to the difference between the eigenfrequencies of the non-Hermitian sensor, making the readout particularly simple. All three of the mentioned methods require an electronic readout that can be either integrated on the chip or external (Figure 9).
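The third readout method can be illustrated with a minimal signal-processing sketch (purely illustrative numbers, ideal steady-state tones): two fields at the eigenfrequencies beat on a photodetector, and the FFT of the detected intensity shows a line at their difference frequency.

```python
import numpy as np

f1, f2 = 2.000e9, 2.012e9          # eigenfrequencies (Hz), illustrative values
fs, T = 40e9, 4e-6                 # sampling rate and record length
t = np.arange(0, T, 1 / fs)

# Total field = sum of the two excited eigenmodes; the photodetector reads |field|^2,
# which contains a beat note at the eigenfrequency difference |f2 - f1|.
field = np.exp(1j * 2 * np.pi * f1 * t) + np.exp(1j * 2 * np.pi * f2 * t)
current = np.abs(field) ** 2

spectrum = np.abs(np.fft.rfft(current - current.mean()))
freqs = np.fft.rfftfreq(len(current), 1 / fs)
beat = freqs[np.argmax(spectrum)]
print(f"detected beat note: {beat/1e6:.1f} MHz (expected {abs(f2 - f1)/1e6:.1f} MHz)")
```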
As seen, PT symmetry strictly requires gain, so a platform providing active materials is needed, such as indium phosphide (InP) or gallium arsenide (GaAs). On the contrary, anti-PT symmetry and quasi-PT symmetry can be realized on a fully passive platform such as silicon-on-insulator (SOI).

PT-Symmetric Electronic Oscillators

Parity-time symmetry can also be achieved with electronic oscillators. Intuitively, by coupling two electronic oscillators it is possible to realize Hamiltonians equivalent to the ones realized in optics. Figure 10 shows a possible implementation of a PT-symmetric Hamiltonian, with two resonators coupled through a capacitance C_C. One of the resonators includes gain thanks to the presence of a negative resistance. A ground-referenced negative resistance can easily be realized with a resistor and an amplifier [13] (Figure 11).

Figure 10. PT-symmetric electronic configuration realized with two coupled RLC resonators (proposed in [13]). Figure 11. Realization of the negative resistor with a resistor and a 2× amplifier.
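The negative-resistance trick of Figure 11 can be summarized with a one-line calculation, assuming an ideal non-inverting amplifier of voltage gain 2 driving the far end of the resistor R:

```latex
% Node voltage V; the amplifier forces the far end of R to 2V.
% Current drawn from the node through R:
I=\frac{V-2V}{R}=-\frac{V}{R}
\quad\Longrightarrow\quad
R_{\mathrm{eff}}=\frac{V}{I}=-R .
```

The node therefore sees an element that injects, rather than dissipates, power, providing the gain required on one side of the PT-symmetric pair.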
Considering Figure 10 and using the Kirchhoff laws for voltages and currents, the circuit equations (Equations (69)-(72)) can be combined into Equation (74). Defining k = M/L, in the hypothesis k << 1 and approximating ω(ωn²/ω² − 1)/2 ≈ ωn − ω, one finds that, for time-harmonic voltages, Equation (74) is equivalent to the Schrödinger equation with a PT-symmetric Hamiltonian. So, it is possible to create a PT-symmetric electronic system by coupling two resonators with opposite gains by means of a coupling capacitance or a mutual inductance (or both).

Anti-PT-Symmetric Electronic Oscillators

Figure 12 schematizes the idea of two coupled resonators for anti-PT symmetry proposed in [76]. Combining Kirchhoff's laws and writing the result in matrix form, for R = R_C one obtains Equation (81), which, for time-harmonic voltages, is equivalent to the Schrödinger equation with an anti-PT-symmetric Hamiltonian. So, it is possible to realize anti-PT symmetry with two electronic resonators coupled with a resistor.

Non-Hermitian Sensing with Electronic Boards

The majority of non-Hermitian sensors in electronics have been developed on printed circuit boards (PCBs) using discrete components soldered onto them. Much interest has been shown in non-Hermitian telemetry: two electronic systems (an active reader, realized with a PCB or a chip and including a source and a resonator, and a passive electronic resonator) communicate wirelessly with each other thanks to the mutual coupling between inductors (see Figure 13); the passive sensor can even be implanted under the skin for biological sensing [77]. In this way a non-Hermitian Hamiltonian is realized, and the perturbation to the sensor or to the mutual coupling between the resonators is enhanced, thus achieving high sensitivity.
Sensing Applications of Non-Hermitian Photonics

In this section, some recent sensing schemes applied in photonics are reviewed. The two most studied applications of non-Hermitian sensing in photonics are optical gyroscopes and particle sensors.

Non-Hermitian Optical Gyroscopes

In [78], Ren et al. proposed for the first time a PT-symmetric optical gyroscope. A gyroscope is a sensor able to measure the angular velocity of its frame with respect to an inertial system. According to the Sagnac effect, the resonance frequency of a single isolated rotating optical ring resonator is shifted, with respect to the rest condition, by an amount ∆ω_Ω,i that depends on the wavelength in vacuum λ, the radius R_i of the i-th ring resonator, the effective index n_eff of the optical waveguides, and the angular velocity Ω of the frame [79]. The minus or plus sign is chosen depending on whether the mechanical rotation is in the same or in the opposite sense, respectively, of the circulation of the optical beam in the resonator. The PT-symmetric gyroscope presented in [78] is based on the standard PT-symmetric structure realized with two coupled resonators with perfectly balanced gain and loss (Figure 14a) and with the same radius. The splitting between the real parts of the eigenfrequencies has been demonstrated to scale as the square root of the product between the Sagnac shift and the coupling strength κ between the cavities. Since the coupling strength is inversely proportional to the radius of the ring resonators, the spectral splitting is independent of the size of the device. This explains the wide research interest in non-Hermitian gyroscopes. The authors demonstrated that the gyroscope exhibits a sensitivity enhancement with respect to the resonance frequency shift of a single ring equal to

∆ω_PT/∆ω_Ω,i = sqrt(|2κ/∆ω_Ω,i|). (87)

In [80] the PT-symmetric gyroscope has been studied theoretically, raising doubts about the existence of a measurable splitting in the output transfer function of the sensor, due to the complex splitting between the eigenfrequencies during rotation. Later, anti-PT-symmetric versions of the optical gyroscope were proposed to overcome the stability problems illustrated in Section 5 (Figure 14b): in [81], a U-shaped waveguide was used to indirectly couple two resonators, whereas in [82] a single bus between the resonators was proposed as a more stable solution for realizing the anti-PT-symmetric gyroscope, with a proposal for integrating this solution on the InP platform. In [73], a ring laser gyroscope was set up in the proximity of an EP (Figure 14c). The device was realized in free space by inserting a Faraday rotator and a half-wave plate inside the optical resonator, realizing a non-reciprocal loss for the counterpropagating clockwise (CW) and counterclockwise (CCW) optical modes. According to the experimental results, an enhancement factor of 20 was obtained for the resonance splitting in the vicinity of the EP. In [83], a very-high-quality microdisk is used to realize a new kind of non-Hermitian gyroscope (Figure 14d). The stimulated Brillouin effect leads to lasing of counterpropagating modes in the microdisk, with ultranarrow linewidths. Moreover, the Brillouin effect perturbs the resonance frequencies even in the absence of rotation, thus leading to an effective anti-PT-symmetric Hamiltonian. By adjusting the pump frequency, it is possible to reach an EP. The angular velocity Ω leads to a perturbation of the Hamiltonian of the system. Experimental results demonstrated the expected enhancement in the spectral response of the gyroscope by a factor of 4.
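A small numerical sketch of the PT-gyroscope scaling is given below. It is illustrative only: the two rings are assumed to acquire opposite Sagnac shifts ±∆ω_Ω in the rotating frame, gain and loss enter as ∓iγ under the exp(iωt) convention, and the exact O(1) prefactors depend on the definitions adopted in [78].

```python
import numpy as np

kappa = 1e6          # inter-cavity coupling (rad/s), illustrative
gain = kappa         # balanced gain/loss rate, biased at the EP

def real_splitting(d_omega):
    """Real-part eigenfrequency splitting for opposite Sagnac shifts +/- d_omega."""
    H = np.array([[+d_omega - 1j * gain, kappa],
                  [kappa, -d_omega + 1j * gain]])   # frame rotating at omega_0
    w1, w2 = np.linalg.eigvals(H)
    return abs(w1.real - w2.real)

for d_omega in (1.0, 10.0, 100.0):   # Sagnac shift per ring (rad/s)
    split = real_splitting(d_omega)
    print(f"Sagnac shift {d_omega:6.1f} rad/s -> splitting {split:9.1f} rad/s "
          f"(enhancement x{split / d_omega:7.1f})")
```

The splitting grows as the square root of the Sagnac shift, so the enhancement factor is largest for the smallest rotation rates, which is the regime targeted by EP-based gyroscopes.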
Figure 14. (a) Schematic of the PT-symmetric optical gyroscope (proposed in [78]). (b) Schematic of an anti-parity-time-symmetric gyroscope (proposed in [81]). (c) Non-Hermitian ring laser gyroscope (proposed in [73]). (d) Brillouin laser gyroscope at the EP (proposed in [83]).

Particle Sensing: Exceptional Points and Exceptional Surfaces

Another highly investigated application of non-Hermitian photonics is particle sensing. In [84], Wiersig demonstrated the possibility of applying EPs to single-nanoparticle sensing (Figure 15a). The effective Hamiltonian for the microdisk with N nanoparticles is written in the travelling-wave basis (CCW, CW), adapted here to the time-harmonic convention adopted in this review; the quantities 2V_j and 2U_j represent the complex frequency shifts for the positive- and negative-parity modes due to particle j alone. Wiersig proposed a system with three particles, two of them generating the EP and the third one being the perturbing one. A diabolic point is realized when B(2) = A(2) = 0 (no scattering between the CW and CCW travelling waves), whereas an EP results when B(2) = 0 or A(2) = 0; this principle is behind what was later introduced as an exceptional surface, because the use of a single optical cavity ensures a hypersurface of EPs. Wiersig then derived the complex frequency splitting induced by the third particle at a diabolic point and, separately, at an EP (for B(2) = 0); the latter contains a square root, and if the square root is larger than one, the splitting at the EP is enhanced. In [85], the enhancement of the splitting predicted by Wiersig was experimentally demonstrated in an optical microcavity: in a log-log graph, a slope of 1/2 of the splitting with respect to the perturbation was demonstrated, different from the slope of 1 at the diabolic point. Later, in [86], an anti-PT-symmetric device was proposed for particle sensing. In [87], a spinning resonator (rotating around its centre) was proposed for reaching the EP in an anti-parity-time-symmetric system, realized with a single cavity, for ultrasensitive nanoparticle sensing (see Figure 15b).
In particular, the rotation induces a difference between the two resonances of the counterpropagating modes (see Equation (85)), thus making it possible to obtain an anti-PT-symmetric Hamiltonian. The indirect coupling mechanism necessary for the anti-PT-symmetric configuration is realized with an external fibre implementing an optical isolator. In [88], a whispering-gallery-mode parity-time-symmetric nanoparticle sensor has been proposed: the presence of gain in the PT-symmetric configuration allows a narrowing of the linewidths, helping to increase the resolution and thus improving the limit of detection of nanoparticles. In [64], the concept of ES was introduced for the first time by Zhong et al. The idea proposed by the authors was to exploit EPs to enhance the sensitivity of a particle sensor without being subject to undesired perturbations.

Figure 15. (a) Schematic of the microdisk resonator for particle sensing at an EP (two particles realize the EP and the third one is sensed) (proposed in [84]). (b) Schematic of an anti-PT-symmetric particle sensor exploiting rotation to reach the EP (architecture proposed in [87]). (c) Schematic of the exceptional surface configuration for particle sensing proposed in [64]. (d) Schematic of the implementation of a microsphere resonator for particle sensing at the exceptional surface: the nonreciprocal coupling between counterpropagating modes is realized via an optical isolator in the external coupling fibre (architecture proposed in [68]).
The solution was implemented with the architecture in Figure 15c. A scattering-matrix method was used to derive the transfer function and the difference φ between the eigenvalues, where r_p is the amplitude reflection of the particle to be sensed, r_m the effective unidirectional coupling from the CW mode to the CCW mode, and η² the nondimensional power coupling coefficient between the waveguide and the ring resonator. For very small values of r_p (r_p << η²), the splitting between the eigenvalues becomes proportional to the square root of the perturbation r_p, in perfect agreement with the condition of the EPs. The advantage of the proposed solution is that any undesired perturbation to the cavity does not affect the EP condition, making the entire system much more robust than classical EP-based sensors. In [68], an integrated ES-based particle sensor has been experimentally realized, demonstrating the enhancement in the frequency splitting caused by small perturbations. The non-reciprocal coupling between the counterpropagating CW and CCW modes in a silica microsphere was realized with an optical tapered fibre coupled twice, from two sides of the microsphere (see Figure 15d): a fibre-based optical isolator in the coupling fibre provided the nonreciprocal coupling between the counterpropagating modes. In [89], a nonreciprocal coupling between the counterpropagating optical modes in a single optical resonator is proposed to minimize the detection limit, thanks to the fact that the two optical modes do not become degenerate at the EP.

Other Sensing Applications of Non-Hermitian Optics

There are several other applications of non-Hermitian optics. In [90], a higher-order EP has been experimentally demonstrated: a cube-root dependence on induced perturbations of the refractive index was shown thanks to the coupling between three resonators. A 3 × 3 Hamiltonian was used to model the device, in which one resonator is lossy, another has gain, and the central one is neutral (see Figure 16a); in the Hamiltonian, +g or −g accounts for the gain or the loss, respectively. This kind of Hamiltonian shows a dependence on the perturbation as ε^(1/3). In 2018, Zhao et al. proposed coating an optical EP structure for thermal sensing with fine spatial resolution [91]: a three-layer Au-PMMA-Au structure was deposited on a silica glass slide, engineering the thermo-sensitive glass slide at an EP (Figure 16b). In 2019, the EP of an optomechanical cavity was exploited to enhance the sensitivity of a mass sensor [92]; the gain or loss was engineered by driving the cavity with a blue-detuned or red-detuned laser, respectively. A magnetometer with exceptional sensitivity, using cavity magnon polaritons with PT symmetry, was proposed in [93]: a third-order EP leads to an estimated magnetic sensitivity of 10⁻¹⁵ T Hz⁻¹/² in the strong-coupling region, which is two orders of magnitude higher than that of the state-of-the-art magnetoelectric sensor. In [94], plasmonic EPs are demonstrated, based on the hybridization of detuned resonances in multilayered plasmonic structures. Reaching a critical complex coupling rate between nanoantenna arrays (Figure 16c shows one of the proposed configurations) results in the simultaneous coalescence of the resonances and of the loss rates, thus allowing the EP to be reached.
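A generic way to see why an N-th-order EP gives an ε^(1/N) response (the cube-root behaviour of the ternary system of [90], and the fourth- and cube-root scalings quoted later for the electronic sensors) is to perturb an N × N Jordan block, which is the canonical form of a Hamiltonian at an N-th-order EP. The sketch below is purely illustrative and is not the specific Hamiltonian of any cited work.

```python
import numpy as np

def ep_splitting(order, eps):
    """Largest eigenvalue spacing of a perturbed N x N Jordan block (N-th-order EP)."""
    H = np.diag(np.ones(order - 1), k=1)   # Jordan block: fully degenerate eigenvalue 0
    H[-1, 0] = eps                          # small perturbation closing the chain
    w = np.linalg.eigvals(H)                # eigenvalues are the N-th roots of eps
    return max(abs(w[i] - w[j]) for i in range(order) for j in range(order))

for order in (2, 3, 6):
    for eps in (1e-12, 1e-9, 1e-6):
        print(f"order {order}, eps = {eps:.0e} -> splitting {ep_splitting(order, eps):.2e} "
              f"(eps**(1/{order}) = {eps ** (1 / order):.2e})")
```

The spacing tracks ε^(1/N) up to an order-one factor, so the higher the order of the EP, the stronger the enhancement for very weak perturbations.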
This setup is proposed for sensing of anti-immunoglobulin G, the most abundant immunoglobulin isotype in human serum.

Figure 16. (a) A parity-time-symmetric ternary micro-ring system with equidistantly spaced cavities (proposed in [90]). (b) Schematic drawing of the thermo-sensitive glass slide engineered at an EP: a three-layer Au-PMMA-Au structure is deposited on a silica glass slide (proposed in [91]). (c) One of the configurations of the plasmonic structure (repeated periodically), made of two optically dissimilar plasmonic resonator arrays with detuned resonances; the detuning can be implemented either using structures of distinct size or using identical resonators in distinct optical environments (architecture proposed in [94]).

In 2020, an ultrasensitive stress sensor with parity-time-symmetric cavities was proposed [95]. In particular, one cavity is embedded on a cantilever beam, serving as the sensing element. The authors claim a sensitivity enhancement of about 8 orders of magnitude over a stress range between 0 and 1 nPa. In [96], it has been demonstrated with a Brillouin microresonator that two non-degenerate EPs behave anisotropically, i.e., when approached from the two directions, the sensitivities to the deviations of the two supermodes differ. This has been proposed for realizing a bi-scale supersensitive optical sensor that can detect particles of different sizes at the same time. In [97], a label-free biosensor for detecting low-concentration analytes has been proposed, based on coupled resonant optical-tunnelling resonators (one lossy cavity and one sensing cavity). The behaviour around the EP is controlled by adjusting the separation between the resonators (and thus the coupling strength). The surface of the sensing cavity is biofunctionalized in advance to bind specific target analytes, which perturb the EP by causing additional absorption in the sensing cavity. The authors quantify the effect of the analyte as a change in the imaginary part of the refractive index (IP): a sensitivity of 17.12 nm/IP was demonstrated, with a detection limit of 4.2 × 10⁻⁸ IP, corresponding to 1.78 ng for the sensing of carcino-embryonic antigen (CEA).
A gas sensor with ultrahigh sensitivity has been shown in [98]: the transverse displacement induced by the photonic spin Hall effect (PSHE) is sensitive to the variation of the refractive index in gas media, especially in the proximity of an EP. The sensitivity of the gas sensing can reach 10⁻⁶ RIU µm⁻¹ if the in-plane wavevector component of the probe Gaussian beam is reduced. In [56], coupled cavities at the EP have been studied for refractive index and absorption sensing. In [74], two counterpropagating modes in a fibre-ring cavity with different losses were used to enhance the sensitivity of a fibre sensor: the differential round-trip loss is induced by an extra fibre ring with an optical isolator, and an erbium-doped fibre is used to narrow the linewidth (pushing the eigenfrequencies towards the real axis).

Sensing Applications of Non-Hermitian Electronics

Non-Hermitian sensing has also been exploited in electronics for different kinds of sensing. Here, some recent outstanding advances in non-Hermitian sensing in electronics are illustrated.

Generalized PT Symmetry for Enhanced Sensor Telemetry

A generalized condition for PT symmetry has been introduced in [99], where the theory of the so-called isospectral parity-time reciprocal scaling (PTX) symmetry has been developed. As shown, PT symmetry is achieved with perfectly balanced gain and loss, corresponding to the negative and positive resistors in the coupled-oscillator electric system. This results in sharp and deep resonances, with improved spectral resolution and modulation depth for sensing. Sometimes, however, practical implementations of sensor telemetry may encounter difficulties in achieving an exact conjugate impedance profile [99], due, for example, to space limitations: when using miniaturized MEMS implanted sensors, the inductance of the sensor's microcoil, L_S, is usually smaller than that of the reader's coil, L_R. In principle, downscaling the reader coil can match L_R to L_S; however, this would lead to a reduced mutual inductive coupling and degrade the performance of the sensor.
This is why the authors introduced an extra degree of freedom to enable the arbitrary scaling of the coil inductance and of other parameters, so as to improve the wireless interrogation. The added degree of freedom is the parameter x in the PTX configuration shown in Figure 17, where x is the scaling factor of the PTX symmetry: for x = 1, the system degenerates into the classical electronic PT-symmetric circuit.

Figure 17. Equivalent circuit model for the PTX-symmetric telemetric sensor. The active reader interrogates the sensor via magnetic coupling. The parameter x is the scaling parameter; for x = 1, the PTX symmetry degenerates into PT symmetry (architecture proposed in [99]). The source is modelled via an equivalent impedance −Z0, realized with a voltage-controlled impedance converter.
Implantable Microsensors

In biomedicine, some implanted electronic sensors are based on resonant inductor-capacitor (LC) circuits that monitor internal physiological states. However, the sensitivity of the wireless interrogation technique is often low, thus limiting the possibility of realizing minimally invasive devices for continuous physiological monitoring. In [77], the authors proposed the readout of an implantable microsensor using a wireless system locked to an EP. The coupling strength κ between the implanted sensor (see Figure 18a) and the reader represents the parameter to be sensed. Using the passive LC circuit as the lossy part of a PT-symmetric sensor is not the best choice for this kind of setup, because the sensitivity of a PT-symmetric device is null for κ = 0 (see Figure 18b). The authors demonstrated that the spectral response ∆ω of the reader biased at an EP follows a dependency ∆ω ∝ κ^(2/3), which greatly amplifies the response to a weakly coupled sensor. The coupled-mode equations describing the combination of the sensor and the PT-symmetric reader are written in terms of the mode amplitudes a_j, where subscripts 1 and 2 refer to the gain resonator and the loss resonator of the reader and subscript s refers to the implanted sensor resonator; ω_j (with j = 1, 2, s) are the resonant frequencies of the three resonators, g1 is the gain rate of the first resonator, γ2 the loss rate of the second resonator of the reader, γs the loss rate of the sensor resonator, and µ the coupling strength between the two resonators of the reader. To be PT-symmetric, the reader requires ω1 = ω2 = ω0 and g1 = γ2. With the Newton-Puiseux series, the authors found that the eigenfrequencies of the system depart from the central frequency ω0 with a dependence proportional to κ^(2/3). This configuration has been experimentally proven to have a sensitivity 3.2 times the limit encountered by existing schemes.

Non-Hermitian Accelerometer

In [100], a preprint, an electromechanical accelerometer is demonstrated; it features a variable capacitor C_C with one plate connected to a test mass that senses the acceleration. The electrical scheme is represented in Figure 19. The authors demonstrated that, thanks to the coupling C_E, the noise due to the collapse of the eigenvectors (demonstrated for the Brillouin gyroscope in [62]) is mitigated by exploiting the detuning from a transmission peak degeneracy (TPD) when the sensor is weakly coupled to the transmission lines.
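As a hedged sketch of the coupled-mode equations described above (not the exact form of [77]: the topology assumes, purely for illustration, that the implanted sensor couples to resonator 1 of the reader, and the signs of the coupling terms are a convention; with the exp(iωt) time dependence used in this review, gain appears as +g1 and losses as −γ):

```latex
\frac{d}{dt}
\begin{pmatrix} a_1 \\ a_2 \\ a_s \end{pmatrix}
=
\begin{pmatrix}
 i\omega_1 + g_1 & i\mu & i\kappa \\
 i\mu & i\omega_2 - \gamma_2 & 0 \\
 i\kappa & 0 & i\omega_s - \gamma_s
\end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \\ a_s \end{pmatrix},
\qquad \omega_1=\omega_2=\omega_0,\quad g_1=\gamma_2 .
```

Expanding the eigenfrequencies of this three-mode system around the reader's EP with a Newton-Puiseux series yields the κ^(2/3) departure from ω0 quoted above.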
The device shows a three-fold signal-to-noise enhancement with respect to configurations working away from the TPD.

Figure 19. Schematic of the PT-symmetric electromechanical accelerometer proposed in [100]. The PT-symmetric circuit is coupled to the transmission lines with capacitors C_E; the capacitance C_C realizes the coupling between the two RLC resonators.

In particular, the authors introduce a normalized Hamiltonian for the PT-symmetric isolated system (in the absence of C_E), here adapted to the chosen convention for the time-harmonic evolution, where γ0 is the normalized gain (loss) in each resonator and κ0 is the normalized coupling strength between the resonators. As required by the EP condition of the PT-symmetric Hamiltonian, κ0 = γ0. Under this hypothesis, including the effect of the perturbation ε (the variation of the capacitance due to acceleration) and the effect of the linewidth broadening γe due to the coupling with the transmission line, the effective Hamiltonian is modified accordingly: the perturbation ε enters the normalized resonance and coupling terms, while γe adds to the imaginary (linewidth) parts. Looking at the transmittance of the system and considering the frequencies ω± associated with the transmission peaks (physical observables, which differ from the eigenfrequencies), the authors find that these frequencies have a coalescence point at a perturbation value ε = ε_TPD that depends on γ0 and γe. So, the maximum sensitivity is reached around ε = ε_TPD. Around the transmission peak degeneracy (TPD) point, the bi-orthogonal basis of the effective Hamiltonian does not collapse and the Petermann factor does not diverge. Since the Petermann factor was the source of the sensitivity limitations in the Brillouin ring laser gyroscope in [62], the separation between the coalescence of the eigenmodes and the coalescence of the measurable frequencies associated with the transmission peaks overcomes the limitations imposed by the Petermann factor.
Other Sensing Applications of Non-Hermitian Electronics

In [101], capacitive sensing for different applications (microfluidic flow sensor, pressure sensor, accelerometer) is performed by implementing a sixth-order EP with non-degraded thermal-noise performance. A capacitive coupling channel is used as the sensing platform to achieve an enhanced resonance shift proportional to the fourth-order root of the perturbation strength, maintaining a high resolution for weak perturbations. The thermal noise is mitigated to a level comparable to the Hermitian counterpart, despite the highly noisy gain and loss elements. In [102], an ultra-sensitive passive wireless sensor is demonstrated, exploiting a high-order EP for weak-coupling detection: a spectral splitting proportional to the cube root of the coupling between two wirelessly coupled electronic RLC resonators is demonstrated. In [103], a glucose sensor with enhanced sensitivity has been proposed, using a PT-symmetric system that sandwiches the tissue sample under analysis. The glucose level changes within the skin are sensed by measuring the frequency shift of the electromagnetic resonance induced in the PT-symmetric system; the skin is modelled as a transmission line.

Conclusions

In this work, recent progress in EP-based sensors has been reviewed, with a particular focus on implementations of non-Hermitian Hamiltonians in optics and electronics. A theoretical overview of non-Hermitian Hamiltonians is presented, in order to provide a helpful starting point for the conceptualization and design of non-Hermitian sensors.
Conclusions
In this work, recent progress in EP-based sensors has been reviewed, with a particular focus on implementations of non-Hermitian Hamiltonians in optics and electronics. A theoretical overview of the non-Hermitian Hamiltonian is presented, intended as a helpful starting point for the conceptualization and design of non-Hermitian sensors. Several experimental works were then discussed, demonstrating the real advantage of non-Hermitian sensing over classical sensing principles in several fields (especially, but not only, angular velocity and particle sensing in optics, and wireless telemetry in electronics). The debate on the influence of noise on EPs is still open; however, new techniques to avoid the negative effects of noise are under active investigation. In optics, the concept of an exceptional surface has been introduced to make the sensor immune to unwanted external perturbations, and new configurations based on it have been proposed; in electronics, a design at the transmission peak degeneracy point has recently been introduced to prevent the coalescence of the eigenmodes at the EP, thus improving robustness.
18,494
2022-05-24T00:00:00.000
[ "Physics" ]
Assessment of Uplink Massive MIMO in Scattering Environment
This paper investigates the performance of massive MIMO systems under the effect of a multipath propagation environment. Linear Minimum Mean Squared Error (MMSE) detection is considered to assess the performance of BPSK/OFDM-based uplink massive MIMO transmission. Bit Error Rate (BER) and channel capacity in a Non Line Of Sight (NLOS) multipath fading environment are presented. The results show a correlation between the number of antennas and the performance of the system.
Keywords: massive MIMO; BER; BPSK; OFDM; MMSE; multipath
INTRODUCTION
MIMO technology is one of the essential factors that have led the development of wireless communications over the last couple of decades [1]. Besides being extremely bandwidth efficient, recent MIMO-based communication systems are capable of providing reliable transmission in the order of several gigabits per second [2]. Advancements in this area have been achieved through various MIMO detection techniques that offer improved performance and reduced complexity [3][4][5]. Multiuser MIMO (MU-MIMO) is a class of MIMO technology in which a group of users are able to communicate wirelessly with one or more antennas, and Space Division Multiple Access (SDMA) is exploited to transmit different signals on the same band. Due to its tremendous advantages, MU-MIMO has been adopted in most of the wireless communication standards during the last decade [6]. It has been shown that there is a correlation between the number of users that can be served concurrently and the number of antennas at the Base Station (BS). As a result, more users can communicate using the same time and frequency resources when the BS is equipped with a large number of antennas [7]. Recently, the concept of MU-MIMO has been developed into massive MIMO, where large antenna arrays can be used at the BS [8,9]. This means that the number of antennas at the BS is very large compared to the number of user antennas within a cell. This new technology has been adopted in 5G NR (New Radio), which has been developed by the 3rd Generation Partnership Project (3GPP) for 5G mobile networks [10]. Its characteristics eliminate the problem of small-scale fading and multiuser interference, which makes linear detection techniques optimal. As a result, energy and spectral efficiencies are substantially increased [9]. Moreover, massive MIMO leads to a huge capacity improvement that could reach up to a 50-fold increase [11,12]. It has been shown that increasing the number of antennas at the BS enhances link reliability and improves the data rate due to the increased number of possible paths for the signal [13]. It also enables the targeted use of the spectrum via beamforming technology, which results in robustness against interference and jamming [14,15]. This paper investigates the performance of massive MIMO systems in a scattering multipath environment. An OFDM-based uplink transmission case where the BS is equipped with a number of antennas larger than the number of users is considered.
II. SYSTEM MODEL
The main characteristic of massive MIMO is that the BS is equipped with antenna arrays, which allows serving a large number of users in the same frequency band. The high multiplexing gain ensures reliable communication with linear processing. The system model of the uplink MU massive MIMO considered in this paper is shown in Figure 1. N single-antenna users are served by a BS equipped with K antennas in a single-cell scenario.
Let h_{k,n} indicate the uplink channel coefficient between the n-th user and the k-th BS antenna [4]; it combines a small-scale fading coefficient g_{k,n} and a large-scale fading coefficient d_n. While the large-scale fading coefficients depend on the user's position, users are assumed to have independent small-scale fading. As a result, the channel matrix H ∈ ℂ^(K×N) collects the coefficients h_{k,n}, as given in (2). Therefore, y ∈ ℂ^(K×1) represents the uplink received signal, which is expressed in (6) as y = √σ H x + n, where σ represents the transmit power, x ∈ ℂ^(N×1) denotes the vector of the uplink signals transmitted by the users, H ∈ ℂ^(K×N) is the uplink channel matrix in (2), and n ∈ ℂ^(K×1) is a Gaussian distributed noise vector with zero mean and unit variance. Therefore, the n-th user transmits the sample x_n, which represents the n-th element of x = [x_1 … x_N]^T. In this paper, the data symbols needed to form the OFDM blocks are randomly selected from the BPSK alphabet with normalized energy. While the coefficients of the small-scale fading for different users are assumed to be independent and identically distributed, the channel vectors of the users become asymptotically orthogonal when the number of antennas at the BS grows to a very large value [16], so that the Gram matrix H^H H becomes approximately diagonal, where H^H is the transpose conjugate (Hermitian) of the channel matrix. The experimental measurements in [17] prove the favorable propagation characteristics of massive MIMO, which supports the assumption made in (7). The uplink channel capacity of the massive MIMO system then follows directly from this property.
III. SIGNAL DETECTION
The data streams, which represent the different signals sent by the various users in the massive MIMO system, must be separated; this can be done by a number of detection techniques. One of these is the Maximum Likelihood (ML) detector. The problem with this kind of detection is its complexity, which grows exponentially as the number of antennas increases. Hence, it is not practical for massive MIMO [18]. MMSE, which is a linear sub-optimal detector, is an alternative with low complexity and is used in this paper. When the number of BS antennas is much larger than the number of users, asymptotic capacity can be achieved using the linear MMSE detector. The received uplink MIMO signal can be demultiplexed at the BS as r = U^H y, where r is an N×1 vector that contains the data sent by the N users and U is the K×N linear detection matrix given in (10), which represents the linear MMSE estimator and depends on the variances of the signal and of the noise. Table I shows the parameters that were used to assess the performance of the uplink massive MIMO system according to the system model and the signal detection described above. A large number of flat-fading channel and BPSK/OFDM realizations were generated in a Monte-Carlo simulation that includes an error-counting method. MATLAB simulation of the uplink BPSK/OFDM block transmission was conducted according to the flowchart diagram shown in Figure 2. After setting the system parameters, the random binary data are transformed into parallel data streams. Data frames that consist of BPSK symbols go through an IFFT of size 2048 with a cyclic prefix of 128 samples in order to produce the OFDM-modulated signal. This signal goes through the channel that represents the massive MU-MIMO N×K system, as shown in (6).
Within the massive MU-MIMO N×K flat Rayleigh fading channel, each path that links the transmitter to the receiver antenna is modeled as an FIR filter whose complex coefficient is Gaussian with zero mean and unit variance. The received signal is then OFDM demodulated, where the cyclic prefix is removed and an FFT is applied. The MMSE detector separates the received signal into different data streams, which are then BPSK demodulated. Unlike the Zero Forcing (ZF) detector, which only minimizes the interference without reducing noise, the linear MMSE detector selects the U that minimizes the mean squared error e = E[||U^H y − x||²] and hence achieves an optimal balance between noise reduction and interference cancelation. However, the computational complexity (which results in time complexity) of the MMSE linear detector is O(KN + KN² + N³) [11].
The BER performance for different numbers of users is shown in Figure 3. It is observed that the BER reduces as the number of users increases. The improvement is obvious when the entire Eb/No range is considered. This results in a gap in the BER for different numbers of users. Figure 4 shows the BER performance of the same system; however, the number of users N is fixed at 10 while the number of antennas K varies. Increasing the number of antennas K at the BS results in enhanced transmission. Thus, the BER decreases because the multipath fading and multi-user interference effects are almost eliminated when the number of antennas at the BS is much larger than the number of users, K >> N [19]. One of the advantages of massive MIMO is that it provides service to a large number of users at the same time, and its performance can be affected by the number of users [17]. The impact of the number of users on the channel capacity is illustrated in Figure 5. The simulation results show a positive correlation between the number of users and the capacity of the channel: the uplink channel capacity increases as the number of users increases. Therefore, spectral efficiency is proportional to the number of users within the cell in massive MIMO systems. To further study the effect of the number of users on the uplink channel capacity in massive MIMO systems, BSs with 50 and 100 antennas were considered. The optimal capacities of the system depend on the number of antennas at the BS and the number of users in the cell, as shown in Figure 6. When the BS is equipped with K = 50 antennas, the capacity gradually grows until 40 active users are reached. After that, it starts degrading as the number of users increases. If the BS is equipped with K = 100 antennas, the maximum capacity is around 125 bits/s/Hz at 64 users. Therefore, the BS with a large number of antennas outperforms the one with a smaller number of antennas. Different estimation techniques have been used in [9] to investigate the influence of the number of active users on the channel capacity, and their results show that the channel capacity starts degrading after a certain point.
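As a compact illustration of the uplink model and the MMSE detection step described above, the following sketch simulates a single-carrier version of the system. It deliberately omits the OFDM modulation, the large-scale fading, and the exact parameter values of Table I (which are not reproduced here), and the MMSE expression used is the standard textbook form rather than a transcription of Equation (10).

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 100, 10          # BS antennas and single-antenna users (illustrative values)
snr_db = -15.0          # low per-antenna SNR, so the array gain of K antennas is visible
sigma_x = 1.0           # BPSK symbol variance (normalized energy)
sigma_n = sigma_x / 10 ** (snr_db / 10)   # noise variance

n_trials = 2000
errors, total = 0, 0
for _ in range(n_trials):
    # i.i.d. Rayleigh flat-fading uplink channel (small-scale fading only)
    H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
    x = rng.choice([-1.0, 1.0], size=N)                 # BPSK symbols
    noise = np.sqrt(sigma_n / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
    y = H @ x + noise

    # Linear MMSE detection: r = U^H y with U = H (H^H H + (sigma_n/sigma_x) I)^(-1)
    U = H @ np.linalg.inv(H.conj().T @ H + (sigma_n / sigma_x) * np.eye(N))
    r = U.conj().T @ y
    x_hat = np.sign(r.real)                             # BPSK decision

    errors += np.count_nonzero(x_hat != x)
    total += N

print(f"BER at {snr_db} dB with K={K}, N={N}: {errors / total:.2e}")
```

Sweeping K and snr_db in this sketch shows the same qualitative behaviour discussed above: for a fixed number of users, the BER falls sharply as the number of BS antennas grows.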
VI. CONCLUSION
Massive MIMO is a wireless communication technology with many issues and aspects that must be investigated. The performance of a massive MIMO system was studied in this paper. The studied scenario consists of a single cell that contains multiple users served through a single BS, where linear detection techniques that process the uplink OFDM transmission were used. The BER results confirm the positive impact of making the number of receive antennas at the BS much larger than the number of transmit antennas, which corresponds to the number of users. Hence, increasing the number of antennas at the BS is recommended when the number of users is large. It was also shown that massive MIMO improves spectral efficiency: the channel capacity increases with the number of users, the number of BS antennas, or both.
2,478
2020-10-26T00:00:00.000
[ "Engineering", "Physics" ]
Quantification of muco-obstructive lung disease variability in mice via laboratory X-ray velocimetry
To effectively diagnose, monitor and treat respiratory disease clinicians should be able to accurately assess the spatial distribution of airflow across the fine structure of the lung. This capability would enable any decline or improvement in health to be located and measured, allowing improved treatment options to be designed. Current lung function assessment methods have many limitations, including the inability to accurately localise the origin of global changes within the lung. However, X-ray velocimetry (XV) has recently been demonstrated to be a sophisticated and non-invasive lung function measurement tool that is able to display the full dynamics of airflow throughout the lung over the natural breathing cycle. In this study we present two developments in XV analysis. Firstly, we show the ability of laboratory-based XV to detect the patchy nature of cystic fibrosis (CF)-like disease in β-ENaC mice. Secondly, we present a technique for numerical quantification of CF-like disease in mice that can delineate between two major modes of disease symptoms. We propose this analytical model as a simple, easy-to-interpret approach, and one capable of being readily applied to large quantities of data generated in XV imaging. Together these advances show the power of XV for assessing local airflow changes. We propose that XV should be considered as a novel lung function measurement tool for lung therapeutics development in small animal models, for CF and for other muco-obstructive diseases.
Common measures of lung function can be applied in animals using current gold-standard tools such as the flexiVent (Scireq, Canada), to measure some of the effects of pharmaceutical and genetic therapeutics in animal models 4 . Regardless of the measurements that can be provided by these common measures of lung function, they cannot accurately localise the changes in airflow that are caused by structural abnormalities within the lung. To identify structural lung disease, image-based assessments such as CT and MRI are commonly used in humans and animal models. CT provides excellent structural information, and can therefore be used to monitor for structural abnormalities from disease progression, or morphological changes produced by new drug therapies. However, preventative treatment programs should ideally be able to intervene before disease establishes and progresses to the point at which it produces structural changes. Similarly, therapeutics assessments should be able to detect early functional changes 4 . MRI has the advantage of being able to simultaneously observe both lung function and structure without exposing the subject to ionising radiation. However, progress in MRI research has lagged behind X-ray-based methods, most likely because spatial resolution is poor and the properties of the lung (particularly the low proton density, since the lung is comprised primarily of air) make it less appropriate for MRI 5 . Nonetheless, recent innovations such as hyperpolarised gas and ultrashort echo time imaging continue to advance human chest MRI research 6,7 . Methods that attempt to infer lung function from CT and MRI images have previously been reported in humans and animals.
Spirometry-guided CT for volume control during imaging has potential benefits but has not been implemented on a large scale and requires extensive patient training to administer 5 . Scoring systems that use software analyses to validate outcome measures from chest CTs have been developed, including the PRAGMA-CF assessment protocol for monitoring early-stage CF in children 8 . These methods must still look to repeatability and standardization of results, as well as restrictions associated with radiation dose from repeated chest CTs 9 . Other developments that show promise include registration-based techniques for measuring lung aeration [10][11][12] , and the use of contrast agents to render lung content directly visible 13,14 . Tracking X-ray microscopy (TrXM) takes advantage of a sophisticated synchrotron-based X-ray imaging system to directly image mouse alveoli during respiration 15 . Despite the availability of techniques that assess either lung function or lung structure, none of these are able to simultaneously identify the origin of changes in function, and evaluate their heterogeneity. Abnormal lung motion during breathing has been demonstrated to be an indicator of disease 16 . Our previous research has developed a method that can rapidly capture the motion of the natural breathing cycle at a high spatial and temporal resolution, without the use of a contrast agent. To do this, propagation-based phase-contrast X-ray imaging (PCXI) was utilised. Since PCXI does not rely solely on absorption of X-rays by matter-but rather the diffraction of rays at material interfaces enhanced by the propagation of x-rays through free space-it can reduce the radiation dose and health risk associated with conventional chest CT scans [17][18][19] . PCXI can be combined with tomography to create detailed three-dimensional reconstructions of the fine structures in the lungs [20][21][22][23] . Using multiple PCXI images acquired throughout the respiratory cycle, Fouras et al. 16 applied particle-image velocimetry to determine the speed and direction of lung motion in three dimensions throughout the respiratory cycle. The resulting regional maps of lung tissue motion can be used to detect subtle and non-uniform lung disease [24][25][26] . This high-speed PCXI acquisition and post-processing analysis is termed X-ray velocimetry (XV). The key difference between standard structural imaging modalities and XV is that XV assesses the dynamics of the lung tissue movement throughout the breath in order to extract measures of tissue expansion. The result is a detailed ventilation map of the lung, which non-invasively enables the volume of air that flows through each branch of the lung tree to be calculated 21 . The potential value of XV for inferring lung function is shown by the characterisation of CF-like lung disease in small laboratory animals at high resolution using a synchrotron-based X-ray source 24 , where the spatial and temporal variability in airflow throughout the lung was assessed. Recently, Murrie et al. reported the proofof-principle translation of XV to a laboratory-based source 25 . They showed that despite a loss of spatial and temporal coherence-which is inevitable when moving from a high-brightness synchrotron to a compact light source-XV data can still be extracted from the resulting images. In the present study, the same laboratory-based X-ray source was used to perform XV on a cohort of β-ENaC mice, a model of CF-like lung disease 26 . 
These data were then used to develop novel numerical methods that delineate symptoms of this disease. The success of XV lies in its ability to draw reliable and meaningful quantitative measures, and this study shows how this can be accomplished. In the future these techniques can be expected to be applied to the numerical characterization of CF lung disease in larger cohorts and other CF animal models. These methods allow analyses to be applied in a straightforward fashion and with minimal manual processing, to enable ongoing study and development of the treatment of CF and other respiratory diseases.
Results and analysis
Here methods for extracting quantitative measures for lung health from the XV data are presented. These can be readily applied to large datasets with minimal manual intervention. Regional distribution of lung function. Figure 1 maps the regional expansion of the lungs at the peak of the breath for both a β-ENaC mouse and its healthy littermate, as measured by XV. The expansion of each region of interest (ROI), defined by the XV voxel, is given as a fractional increase over the course of the breath, i.e. (change in volume of ROI)/(volume of ROI). The resulting measurement, fractional expansion, is a unitless quantity. The XV expansion data shown in Fig. 1 clearly allows the location of the airflow deficits to be determined within this plane inside the lung (see red arrow in panel 1a). In order to quantify differences throughout the entire lung volume, methods of calculating the distribution of tissue expansion have been developed here. An example of this approach is shown in Fig. 2a, which shows a histogram calculated from the fractional tissue displacement across the volume of the lung for each mouse shown in Fig. 1. The measurements for the β-ENaC mouse (from Fig. 1a) are shown in red and its healthy littermate (from Fig. 1b) in blue. The interquartile ranges (IQR) of each histogram are indicated in the graph. To adjust for variations in lung size across the cohort, the area under each histogram has been normalised to 1. In a homogeneously ventilated lung, the range of values for fractional tissue displacement should be narrow. In circumstances where there is heterogeneity due to a 'patchy' disease such as cystic fibrosis, a wider range of values (and therefore a higher IQR) is expected due to varied areas of poor ventilation and air trapping from mucus obstruction, as shown in Fig. 2a. Also apparent in the histogram from this particular β-ENaC mouse is the bimodal peak, where the lower peak represents regions of poor lung health (airflow), as indicated by the red arrow in Fig. 1a. This is likely to be caused by the presence of mucus obstruction, which is a feature of this animal model 27,28 . Our previous synchrotron-based study 24 used histological sections to confirm that areas of reduced ventilation as measured by XV analysis corresponded to mucus blockages in the bronchial tree 24 . Concurrently, the global (total) expansion of the lung at each point in the breath was calculated in order to demonstrate the superiority of XV in its ability to produce information about the spatial distribution of lung function at any point in the breath, rather than a single global measure of lung function averaged across the entire breath.
The global expiratory time constant (τ) (defined previously in 24 ), is calculated as the time taken for 67% (∼ 1/ √ 2 ) of the air to be expired from the lungs. The volume in Fig. 2b is calculated from the total magnitude of the 3D tissue displacement vectors over the entire lung for each point in time, and is normalised according to the total lung volume, which was calculated by evaluating the volume of the mask used for tissue segmentation (see "Image processing"). The purpose of this normalisation is to be able to express the volume of air breathed as a fraction of the total lung volume. In this example, the β-ENaC mouse has a fractional tidal volume that is lower than its healthy counterpart due to poorer expansion in some parts of the lung. Note that the measurements in Fig. 2b are expressions of the average health across the whole lung, without reference to the local distribution across the lung, and is analogous to measures such as FEV 1 . In contrast, the fractional expansion histogram (Fig. 2a) contains many spatially-separated measurements for the lung at the peak of the breath and is designed to visualise the airflow heterogeneity in the presence of this muco-obstructive disease. Quantifying distribution of disease. As with human CF lung disease, there can be substantial variability in the severity and location of muco-obstructive disease between individual β-ENaC mice. Various presentations of CF-like disease have been categorised using the expansion histograms generated from the XV tissuedisplacement calculations, by assigning numerical quantities to characterise their shape. In the implementation presented here a simple least-squares fit for a Gaussian distribution was adopted, and the statistical moments of the fitted curve have been used to numerically characterise the profile of the histogram and relate them to symptoms of disease. Figure 3 shows the histogram of nine different animals each with a score for two properties that describe how lung airflow is distributed throughout the lung. The measured local expansion across the lung, www.nature.com/scientificreports/ which represents functional changes, may collect around certain values (clustered) or vary widely (heterogeneous), thus each mouse is scored for the presence of either heterogeneous disease (HD) and clustered disease (CD), as defined below. All of the plots show the histogram of the raw data in a grey unbroken line, with the black broken line showing the nonlinear regression line-of-best-fit calculated by applying a least squares approximation to a double-Gaussian curve. The goodness-of-fit is shown on each figure as R 2 . For this experiment, three categories of histogram profiles are seen: Healthy The healthy animal shown in Fig. 2a (blue histogram) shows a typical tall and narrow expansion histogram from a healthy mouse lung, showing expansion data that falls into a narrow range and which represents homogeneous expansion. Figure 3a-c show the expansion histograms from three healthy mice, showing the characteristic tall and narrow peak. Heterogeneous disease (HD) When CF-like disease is established across the lung, we expect XV to demonstrate characteristic heterogeneous lung function, with the disease presenting across the volume of the lung. In lieu of a tall and narrow peak where most of the lung expands evenly (by the same percentage), we expect a low and wide peak, with a larger range of values as some parts of the lung expand less than other parts. 
Each sample receives a score for patchiness, calculated using the term IQR/IQR_L, where IQR is the interquartile range of the histogram and IQR_L is the average interquartile range for the littermate population. A healthy lung will have a value close to 1 (see Fig. 3a-c), with the score increasing as lung health becomes more heterogeneous. Figure 3d-f show profiles for heterogeneous disease, each with a low and wide peak and higher HD values. Figure 3f has a heterogeneity level of 1.78, or rather 178% of healthy lung function variation. Clustered disease (CD) If airways are partially obstructed with mucus, poor ventilation and air trapping in the regions of the lung that receive air via those obstructed airways may result. In the end stages of this disease this may result in permanent damage to those regions due to atelectasis or bacterial infections that have been brought on by the presence of mucus. Where there is mucus preventing ventilation to a region of the lung, this region is less healthy than the rest of the lung, resulting in a clustered disease presentation. In the histogram of the expansion data, this is typically expressed as bimodality, or a split peak (as there could be two or more distinct regions). The diseased mouse in Fig. 2a shows such a peak. In order to determine whether or not the histogram of some expansion data possessed a second peak, and how distinct the split between the peaks was, in this implementation a double-Gaussian distribution was consistently fit to each histogram. It was then possible to provide a score for bimodality using the term (μ2 − μ1)/μ2, where μ1 and μ2 are the values of the two modes, or rather the mean values of each peak in the double-Gaussian distribution. Note that a double-Gaussian function was fitted to the data not because we assumed the data followed a normal distribution, but because the characteristics of a Gaussian distribution (smooth, tends to zero at ±∞) made it a suitable basis function, conveniently applied to our data to readily extract functional measures. Low levels of mucus plugging of the large airways are indicated by low CD quantities in Fig. 3a-c. Figure 3f shows levels of clustered disease approaching 0.34, where a second peak is beginning to separate itself from the central mode. Figure 3g-i show expansion histograms with distinct CD presentation. In Fig. 3h, the algorithm has failed to pick up on the peak that is indicated by the red arrow. This is likely because there are three different regions, not two. The middle region (blue arrow) is not well-separated from the central mode. While the label 'clustered' strictly refers to a grouping of expansion values within the histogram, it is typical that this is associated with a spatial grouping of low-expansion pixels within the lung image.
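The scoring just described can be written down compactly. The sketch below is an illustrative re-implementation rather than the authors' code (which is stated later to be restricted): it fits a double Gaussian to a fractional-expansion histogram with a least-squares routine, derives an HD-style score from the interquartile ranges and a CD-style score from the two fitted means, and uses synthetic data, bin choices and a littermate reference value that are all made up for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic fractional-expansion values for one "mouse": a well-ventilated mode
# plus a smaller, poorly expanding mode (illustrative numbers only).
expansion = np.concatenate([
    rng.normal(0.20, 0.03, 8000),   # healthy regions
    rng.normal(0.08, 0.02, 2000),   # poorly ventilated regions
])

# Histogram normalised so the area under it is 1, as in the text.
counts, edges = np.histogram(expansion, bins=80, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian basis functions used to describe the histogram shape."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

p0 = [counts.max(), 0.20, 0.03, 0.3 * counts.max(), 0.08, 0.02]   # rough initial guess
params, _ = curve_fit(double_gaussian, centres, counts, p0=p0)
mu_low, mu_high = sorted([params[1], params[4]])

# HD-style score: IQR of this sample over a reference littermate IQR (made-up value).
iqr = np.subtract(*np.percentile(expansion, [75, 25]))
iqr_littermate = 0.04
hd_score = iqr / iqr_littermate

# CD-style score: separation of the two fitted modes, (mu2 - mu1) / mu2.
cd_score = (mu_high - mu_low) / mu_high

print(f"HD ~ {hd_score:.2f}, CD ~ {cd_score:.2f}")
```

As in the text, the double Gaussian is used purely as a smooth basis for describing the histogram shape, not as a claim that the expansion values are normally distributed.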
Figure 4 shows coronal slices through the 3D expansion volumes that correspond to the histograms in Fig. 3. In Fig. 5 the HD and CD values at the peak of the breath are plotted against the tissue hysteresivity (measured by FOT). In Fig. 5a, XV measurements that suffer from excessive "heart blur" (shown in black; this imaging artefact is described in the section below) are separated from the rest of the data points, which are themselves divided into β-ENaC (red) mice and their healthy littermates (blue). The blue data points consistently present with an HD index of ~1, a homogeneous lung function presentation, while the red data points show greater variation, which is consistent with the range of disease presentation exemplified earlier in the "Results and analysis" section. It is not expected that the full complexity of respiratory disease can be captured with a single measurement; nonetheless this simple plot shows a correlation with FOT measurements. Figure 5b further shows the large range of disease presentations that are seen within the group of β-ENaC mice, in this case characterised using the clustered disease index (CD). More variation is expected amongst samples with CF-like disease, due, for example, to the variability in symptoms discussed in "Results: quantifying symptoms of disease". Heart blur. As described in "Methods", the image acquisition was coordinated with ventilation. The heart, however, beats independently of image acquisition, resulting in some lung motion blur. Due to the spatial and temporal resolution of the laboratory-based source, along with the small exposure times required to acquire XV images, this blurring could cause local failure of the XV algorithm when it attempts to faithfully capture speckle motion. Figure 6 shows examples from four separate animals. Dubsky et al. 29 used XV technology to show the manner in which cardiogenic oscillations affect airflow around the lung, by calculating the resulting tissue displacement. Their work showed that significant lung movement due to cardiac excursions is seen in the lower left regions of the lungs. This was also seen in our data, with the lower left region of the lungs (red arrows, Fig. 6) blurred in the CT image, preventing accurate measurement of tissue displacement using XV analysis. This resulted in unrealistic HD and CD values. A single CT slice from the first time point of the breath for mouse M11 reveals this heart-motion-associated blurriness (Fig. 7, red arrows). For comparison, the yellow arrows show a well-defined lung edge, situated away from the heart region. The methods for detecting heart blur issues numerically are presented below in the "Discussion".
Discussion
This study successfully shows that XV, when performed on a laboratory source 25 , can capture and differentiate a range of lung disease presentations seen in β-ENaC mice via the novel quantification methods described here. Currently, clinically feasible methods for the quantification of regional lung airflows and heterogeneity are sparse, but this study shows that it is possible to measure the dynamics of muco-obstructive disease presentations in large groups of animals using XV without the need for a synchrotron X-ray source. A significant outcome of this work is to provide the proof-of-principle for a straightforward means of numerically evaluating small animal lung health in the presence of symptoms of CF-like disease. It has been shown that the histogram for regional tissue displacement enabled by XV analysis provides information about the variability of airflow within the lung. Using the model described here, consisting of a score for both clustered disease (CD) and patchy/heterogeneous disease (HD), an overall presentation of CF-like disease can be quantified numerically. Combinations of these symptoms can account for a more rigorous approach to lung function evaluation than other techniques (e.g.
a traditional lung function breath measurement such as spirometry, which measures breathing at the mouth and so averages airflow effects over the whole lung). When compared to FOT measurements for tissue hysteresivity, the HD score shows similarly low values for healthy animals. For diseased animals, there is greater variation-a complexity of presentation that can not be captured by a single measurement. In future studies this scoring system should be suited to analysis of the accuracy of this system in separating disease states in larger cohorts, by using Machine Learning analysis of clustering and association of data points (as with 30 ). Since the HD score is normalised to the mean variance of the healthy population, a new group of animals would have different baseline characteristics. Larger sample sizes would also enable us to use a Deep Learning model (see 31 ) to determine the thresholds for the HD and CD which correspond to muoco-obstructive disease at its various stages. Future investigations can also determine the optimal combination of Gaussian curves which provides the best measure of CD. The use of more powerful statistical techniques such as functional data analysis for describing the shape of the histogram 32 should also be investigated, along with an examination of how other respiratory conditions present using these numerical XV characterisations. The challenge of processing the volume of data that is required to develop this model presented here will be made easier with the inclusion of algorithms that can independently search data to detect symptoms of disease, without the need for user input. A number of image analysis algorithms are in development in order to accomplish this. For example, to orient each sample identically, we have implemented a mirrored symmetry approach on a CT image dataset that has been projected in the cranial/caudal direction to find the position of the spine, as shown in Fig. 8. The symmetry of the image is evaluated by comparing the features of the image with those of its reflection 33,34 . Ultimately, the image is rotated to a position whereby the spine lies at the bottom of the image, to facilitate automated cropping. Correcting orientation provides the means to associate data points with their physical location across the lung, allowing, for example, automatic identification of heart blur due to characteristic proximity to the heart 29 . Manual segmentation of the conducting airways or lung lobes, a preliminary step to XV, is time-consuming. Although providing impressive visualisation of lung function and highly accurate localisation of obstruction results, the inclusion of a manual step is not practical for large datasets. As a result, elements in image processing have been established that are designed to automatically segment the lungs from surrounding tissue, using thresholding, 3D morphological filters and continuity checks between slices. This approach will allow for the analysis of XV datasets to move from a new technique requiring some analysis training and effort into a highly accessible technique that can routinely evaluate large sets of data. To address the challenge of heart blur, Lovric et al. 35 have implemented a heartbeat-triggering gating technique into their image acquisition, although this is likely to be particularly challenging at the very high frame rates required for acquiring XV images at high ventilation rates. 
While experimental parameters have been optimised with the set-up for this experiment 36 , using a smaller spot size and higher power for the X-ray source, or a more sensitive detector, can increase the phase signal and reduce the amount of noise and blurriness in the X-ray images to improve our capabilities. www.nature.com/scientificreports/ Translating XV to a laboratory-based X-ray source creates challenges associated with lower spatial resolution than what is available with a synchrotron-based source. However, the use of magnification at the laboratory X-ray source enables the use of large, highly-efficient pixels that can reduce the associated radiation dose, which is a key step on the translational path to the clinic. Rather than imaging the fine structures of the lungs at the very high resolution (and hence high dose) required to directly observe the presence of disease, disease can be inferred by analysing the expansion maps produced by XV. The subject can consequently be exposed to lower amounts of ionising radiation than would be required if these blockages were to be resolved directly. Techniques such as FOT and FEV 1 provide single measures of lung function that are difficult to interpret alone, such that lung CT is often required to identify the structural abnormalities that might be the source of the change in lung function. The XV analysis technique presented here provides a more regional analysis of function across the lung and throughout the breath and provides the researcher, uniquely, the locations of airflow dysfunction Thus, XV has the potential to become a routine diagnostic tool to measure and monitor animal models, and ultimately humans, for improvements or declines in lung health. To translate XV to human use we have tomosynthesis experiments, large animal studies, and human clinical trials underway. Finally, XV will have applications beyond CF lung disease, and have value in other respiratory diseases such as asthma, COPD, emphysema and lung cancer, and in the development and assessment of respiratory therapeutics. Conclusions Here, laboratory-based XV has been applied to the evaluation of lung disease heterogeneity in a group of β-ENaC mice and their healthy littermates. We also present a novel, straightforward and intuitive method for quantifying the distribution of their muco-obstructive disease. Future automated approaches will allow the application of this model to large sets of data in order to observe lung function changes during treatment, to develop a robust numerical model for CF lung disease. The combination of X-ray velocimetry and progressive automation of the data analysis is an important step in the development of more sophisticated methods of lung function testing, and should assist research internationally to improve the health and lives of people with cystic fibrosis, and a range of other lung diseases. Methods Image acquisition. All images were acquired at the Laboratory for Dynamic Imaging at Monash University on a propagation-based PCXI set-up shown in Fig. 9, with the X-ray beam (Excillum D2+, Excillum AB, Kista, Sweden) produced by an electron beam striking a liquid-metal anode. A high power (265 W) was used over a small source (spot size: 60 μm × 15 μm) to generate the flux needed to achieve an imaging rate of 30 frames per second, and coherence sufficient to generate the phase contrast necessary across the lung volume 37 . 
With a conventional solid-metal anode, one would have to take care not to overheat the target while attempting to generate a higher flux. However, by using a liquid-metal-jet anode (pumped under high pressure to maintain a laminar flow) there was no concern of approaching the limits of heating the metal target in order to extract sufficiently high flux. The source-to-detector distance was fixed at 3,363 mm, with a maximum source-to-sample distance of 467 mm. The translation stage enabled the mice to be moved toward and away from the source to alter the zoom factor as required. To produce phase contrast in the resulting images, a minimum propagation (sample to detector) distance of 2,896 mm (through the ~30 cm diameter vacuum tube) was used. To minimise scattering and avoid an associated reduction in image contrast, the X-rays were propagated through a vacuum tube before reaching the detector. Animal experiments. All experiments were approved by the Monash University Animal Ethics Committee and conformed to the guidelines set out in the NHMRC Australian Code of Practice for the Care and Use of Animals for Scientific Purposes. β-ENaC mice (n = 15), aged 45-84 days (median = 62 days) at the time of imaging, were used for all experiments 26 . Mice were bred on a C57Bl/6N background, and supplied from our specific pathogen-free breeding colony (Monash Animal Research Platform). Littermate controls (n = 10) were used to minimise the effects of strain on the lung phenotype. Offspring were genotyped at 3 weeks of age via PCR of genomic DNA as previously described 26 . Mice were anaesthetised with an intraperitoneal (i.p.) injection of a 10 μl/g body weight mixture of medetomidine (0.1 mg/ml, Orion Corporation, Finland) and ketamine (7.6 mg/ml, Parnell Laboratories, Australia), and surgically intubated. The endotracheal tube was attached to a custom-built small animal pressure-controlled ventilator (AcuVent, Notting Hill Devices, Australia) at 12 cmH2O PIP and 2 cmH2O PEEP with a respiratory rate of 120 breaths per minute (inspiration time of 0.15 s and an expiration time of 0.35 s). To maintain the normal dynamics of lung function, a paralytic was not used. Mice were mounted in a vertical position in front of the source on a custom high-precision rotation stage (Zaber Technologies, Vancouver, Canada). Mice were rotated through 360 degrees at 1.5 degrees per second while a flat-panel detector (PaxScan, Varian Medical Systems, Palo Alto, CA, USA) captured images at the rate of 30 Hz to acquire a total of 7,200 images per mouse. Image acquisition was triggered by the ventilator, and was gated to collect 15 images throughout the breathing cycle. Airway pressure and flow were monitored throughout the experiments. At the completion of the imaging experiments, global lung mechanics were measured using a modification of the forced oscillation technique (FOT). Mice were hyperventilated at 400 breaths per minute for 60 s to induce brief (6 s) periods of apnea. During apnea, an oscillatory signal, generated by a loudspeaker, containing 9 frequencies ranging from 4-38 Hz was introduced into the tracheal cannula via a wavetube of known impedance. The impedance of the respiratory system (Zrs) was calculated. A four-parameter model with constant phase tissue impedance 38 was fit to the Zrs spectrum, allowing determination of tissue hysteresivity, which is calculated as the ratio of the tissue damping to the tissue elastance 39 . Image processing.
To complete the XV cross-correlation analysis, the 7,200 projections were organised, or binned, into their time points resulting in 400 projections per time point, with a total of 15 time points across the 500 ms breathing period. Computed tomographic reconstruction was performed for each set of projections giving 15 separate CT reconstructions, one for each of the 15 stages of the breath. Each CT consisted of 1,024 slices, each 1,024 pixels by 1,024 pixels. The effective voxel size varied from mouse-to-mouse. Using Avizo software (ThermoFisher Scientific), the CT volume representing the beginning of the breath was used to create a mask for isolating the lung tissue from the rest of the animal. The volume of the mask was also used to determine the total voxel size of the lung, which, after accounting for variation in effective voxel size, was used to normalise the volumetric results. For XV, an interrogation region size of 64 ⨉ 64 ⨉ 64 pixels with an overlap of 50% between successive interrogation windows were used, producing a XV voxel size of 32 ⨉ 32 ⨉ 32 pixels. The XV output showed the magnitude and direction of the lung tissue motion vectors between each time point. This displacement of tissue denotes lung expansion, and was expressed in voxels per frame. The mainstem bronchi were removed from the expansion map images to improve clarity. Data availability The data that support the findings of this study are available on reasonable request from the corresponding authors. The XV analysis code that supports the findings in this study is not publicly available due to patent restrictions. Code may however be available from the authors upon reasonable request and with permission of Monash University and 4Dx Limited.
7,238
2020-07-02T00:00:00.000
[ "Biology" ]
Analysis of Heating and Cooling Loads of Electrochromic Glazing in High-Rise Residential Buildings in South Korea : This study compares the impact of the recently developed electrochromic glazing technology on load reduction by comparing it with the double-glazing and shading devices that are sold commercially for high-rise residential buildings in Korea. These buildings are similar to large office buildings in terms of their high window-to-wall ratio. The energy consumption of such buildings was simulated using an analytical model of a high-rise residential building. The patterns between the heating and cooling loads were found to be similar to that of office buildings, in that the cooling load was considerably higher than the heating load. This study hypothesizes that the load reduction performance of electrochromic glazing with variable solar control and high solar radiation rejection is superior to that of existing double-glazing products and shading devices. This hypothesis was tested by analyzing the cooling and heating loads of buildings with different types of double glazings. Bleached electrochromic glazing exhibited lower transmittance than colored glass double glazing, low-e double glazing, and double glazing with a shading device, and is thus not effective in reducing heating load. Colored electrochromic glazing provided higher solar radiation rejection than colored glass double glazing and low-e double glazing, and thus is effective in reducing cooling load. Research Background and Objective Windows and doors are the only glazing systems that can capture solar energy. They can also serve as a medium that reduces building load and improves the indoor environment through appropriate inflow and the rejection of solar radiation [1]. As glazing systems, windows and doors require separate shading devices for solar control, such as roll shades or blinds. However, smart glass, which does not need any separate shading device to control the optical properties of glass and, thus, solar radiation, has recently been actively researched in the literature. Types of smart glass include electrochromic, PDLC (polymer-dispersed liquid crystal) [2], and SPD (suspended particle device) [3]. Of these, this study analyzes the optical properties of electrochromic glazing and its effect on HVAC (Heating, ventilation, air conditioning) load reduction in buildings [4,5]. Regarding the main characteristics of electrochromic glazing, which are dealt with in this study, transmittance can be controlled in stages depending on the amount of electricity that is supplied to either the cathode (−) or the anode (+), and the scope of transmittance control covers the area of visible and near-infrared rays, which account for the largest portion of energy in the solar spectrum [6]. Therefore, with appropriate control according to changes in the outdoor environment using the characteristics of selective transmittance control, it is possible to contribute significantly to a reduction in heating and cooling loads as well as the energy used for lighting [7,8]. Based on these characteristics, studies have been actively conducted on electrochromic glazing control to reduce the amount of energy consumed by buildings using such data as indoor-outdoor temperatures and the amount of solar radiation and illumination. Moreover, the daylight glare index and lighting energy have been investigated in the context of indoor light to represent the characteristics that can control the range of visible rays [6,9]. 
According to the results of simulations of office buildings in the United States (US), 10-20% of the energy saving effect is obtained at the perimeters of buildings [10], and at least 54% of the energy can be saved in a Mediterranean climate [11]. A study showed that 37-48% more lighting energy can be saved compared with manual control-type blinds [12]. Furthermore, several simulations and experimental studies are ongoing on the performance of electrochromic glazing technology [13]. A few recent studies have examined electrochromic glazing that can intensively control near-infrared (NIR) rays. It maintains a visually clear state and secures visible rays related to lighting energy while controlling only the NIR rays that influence cooling and heating energy [14]. Moreover, photovoltaic-related electrochromic glazing is being developed, which is colored using power charged on batteries and produced using photovoltaic energy. Thus, it does not require a separate power supply, and is an eco-friendly product that is easy to build and does not consume operation energy [15]. Furthermore, electrochromic glazing can be controlled on smartphones using Wi-Fi in association with the Internet of Things. This provides convenient functions compared with existing shading devices [16]. Further, apart from electrochromic glazing with a glass substrate, electrochromic films with a web-coated flexible polyester (PET) substrate have also been developed, and have various applications [17]. Another advantage of electrochromic glazing is that unlike the blinds or roll shades installed indoors, it blocks solar radiation through glass, which is the outermost part of an insulated glazing unit. As a result, it is more effective at reducing the cooling load, owing to its high solar radiation rejection. Moreover, it has a memory effect whereby it maintains its state once supplied with electric power once, without requiring any further power once it is colored or bleached. This minimizes the operation energy that is needed. However, the drawback of most electrochromic glazing products is their low speed of response. It requires five minutes for a 10 cm × 30 cm glass [18] to be colored, and eight minutes and 12 min for a 1.2 m × 0.8 m glass to be colored and bleached, respectively [19]. On the contrary, a few recently released electrochromic glazing products can be colored within three minutes. These products show promise in enhancing the performance of commercialized products in the future [20]. The objective of this study is to analyze the optical properties of test products for the commercialization of large-area electrochromic glazing for the first time in Korea, to the best of the authors' knowledge, and determine whether the optical properties are effective at reducing the energy consumed by buildings. High-rise residential buildings in Korea with high window-to-wall ratios were assessed to this end. For performance comparison, this study analyzed the building load performance of existing double-glazing and shading devices that are available for commercial use in Korea. Research Method and Scope We created electrochromic glazing specimens to obtain the transmittance, reflectance, and absorptance for each range of spectra through measurements using a spectrum analyzer. Using raw data, we obtained the input data for WindowMaterial:Glazing, which is a component of structural glass on EnergyPlus, by employing the LBNL Optic 5.1 tool. We thus formed the double glazing, and obtained heat and optical data. 
The data for commercially available glass were obtained from the data for glass that is distributed in Korea on IGDB provided by LBNL Window 7.4. To analyze building loads using the heat and optical data, we used EnergyPlus8.5, a dynamic building energy analysis program, to model a high-rise residential building with a high window-to-wall ratio because of curtain walls, located in Seoul, Korea. We then applied 10 cases, including colored double glazing, low-e double glazing, double glazing with a shading device, and electrochromic double glazing on EnergyPlus, and comparatively analyzed building load performance through a parametric study. We focused on window heat gain, window heat loss, and hourly load change on typical days of winter and summer based on a design day. We hence analyzed the effects of the characteristics of heat flow of each glazing on the load. We ultimately analyzed monthly and annual loads, which we used to comparatively analyze the performance of electrochromic glazing and commercially available double-glazing and shading devices in terms of building load. Figure 1 show the research guideline and flowchart. Analysis of Optical Properties of Electrochromic Glazing Electrochromic glazing consists of a transparent conductive object, a colored layer, an ion storage layer, and an electrolyte. In this study, we used TEC 10 (manufactured by Pilkington) as a transparent conductive object with glass on which electric current can be applied. We coated the colored layer with tungsten oxide (WO 3 ) and the ion storage layer with nickel tungsten (NiW) using a sputtering device that could coat areas of up to 1500 mm × 1800 mm. We used LICLO 4 in gel form for electrolytes for ion movement, and formed a silver paste and bus bar electrodes to provide electrical power. Specimens of size of 50 mm × 50 mm were fabricated as shown in Figure 2 to analyze the optical properties of electrochromic glazing. We then measured transmittance and reflectance at a solar spectrum wavelength of 0.3-2.5 µm (0.005-µm intervals) using a spectrum analyzer. These measurements were performed with electrochromic glazing in the bleached and colored states at a voltage of 2 V in each state. To enter the raw data of transmittance and reflectance extracted through the spectrum analyzer for the WindowMaterial:Glazing component of EnergyPlus, we imported the data on the LBNL Optic 5.1 program, and extracted the spectral average data of solar transmittance, solar reflectance, visible transmittance, and visible reflectance, as shown in Table 1. Optical properties were analyzed using spectroscopic analysis and the Optic 5.1 tool, and we found a considerable difference in the solar transmittance (Tsol) and visible transmittance (Tvis) between the bleached and colored states. As shown in Table 1, solar transmittance (Tsol) could be adjusted by 41.6%, from 48.1% when bleached to 6.5% when colored, and visible transmittance (Tvis) could be adjusted by 52.8%, from 64.8% when bleached to 12% when colored. Building load can be predicted using spectral data. According to the transmittance graph shown in Figure 3, electrochromic glazing exhibited a higher visible ray rejection in the colored state than in the clear glass or single low-e glass state, and similar rejection to that of single low-e glass in the NIR range. Its total solar transmittance (Tsol) was low, and thus, it is effective in terms of cooling load reduction in the summer. 
In bleached state, its visible ray and NIR transmittance was higher than that of single low-e glass, and thus it is more likely to reduce heating load. However, its transmittance was lower than that of clear glass, and thus it is less likely to reduce heating load. Moreover, the front and back-side reflectance graphs provided in Figures 4 and 5 show that even though the reflectance of electrochromic glazing was lower than that of single low-e glass in the colored state, transmittance was low primarily due to absorption. Optical properties and building load can be predicted through spectroscopic analysis. However, for more accurate predictions, it is necessary to analyze the solar heat gain coefficient of the glazing system by considering the penetration, absorption, and reflection of glass, and to conduct analyses based on dynamic building energy simulations, which are described in the following section. Overview of Analytical Simulation Model The facades of recently constructed high-rise residential buildings in Korea are similar to those of office buildings, as they use curtain wall windows and doors with high window-to-wall ratios in order to provide a view. We set as our hypothesis the claim that high-rise residential buildings have high cooling loads, and by applying electrochromic glazing with variable solar control and high solar radiation rejection, this load can be significantly reduced in comparison with the reduction obtained by using double-glazing products or shading devices. Accordingly, we selected a high-rise residential building recently constructed in Seoul as an analytical model, as shown in Figure 6. EnergyPlus 8.5 (developed by the DOE (Department of Energy)) was used as simulation tool [21]. EnergyPlus is based on the heat balance equation, and thus can calculate heat transfer by using the conduction, convection, and radiation of windows and doors as well as the penetration, reflection, and absorption of solar radiation [22][23][24][25][26][27][28]. It also allows for a detailed analysis of heat transfer between the shading device, and doors and windows [29]. The floor area of the analytical model was 215.12 m 2 , the air conditioning area was 181.63 m 2 (84.4%), the height of the living space was 2.9 m, and the area occupied by the windows was 67.7 m 2 on an exterior wall with an area of 106.5 m 2 . Thus, the window-to-wall ratio was 63.5%. The analytical mode was constructed based on a typical floor. The ceilings, floors, and northern side wall were set as adiabatic boundaries. The southern, eastern, and unclear western exterior walls followed the insulation standards set by the Standard for Energy Saving Design in Buildings (2016), as shown in Table 2. The southern, eastern, and western windows had clear double glazing (6 mm clear + 12 mm air + 6 mm clear), with a heat transmission coefficient of 2.685 W/m 2 K, a solar heat gain coefficient of 0.714, and a visible ray transmittance of 0.790. Clear double glazing was applied to analyze the load of the baseline, and the following section describes the application of different types of glass. The temperatures set for the model were 22 • C and 28 • C, based on the indoor temperature standard for the cooling and heating systems in the Standard for Energy Saving Design in Buildings [30]. The heating period was assumed to span from October to March, and the cooling period was assumed to span from May to September. Weather data for Seoul were used. 
The city has a continental climate in which the annual variation in temperature is as high as 30 °C, as high pressure from the Eurasian continent dominates the winter season, while hot and humid oceanic air masses significantly influence the weather in summer. The IdealLoadsAirSystem (provided by EnergyPlus) was applied for the load analysis of the HVAC system to eliminate interference from system variables as much as possible. With the IdealLoadsAirSystem, the cooling and heating loads required in the zone are met by a virtual air conditioning system with infinite capacity, and the load calculation is performed using only the difference in enthalpy between the specified supply air and the mixed (ventilation + outside) air. The internal heat gains are shown in Table 3, with reference to the ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.) Fundamentals (2009) [31]. The multifamily two-zone (bedroom and other rooms) schedules provided in the EnergyPlus datasets were applied for the internal heat gain schedules of occupants, lighting, and equipment [32]. The outdoor air volume was set to 1.1 m³/m²·h, in accordance with the building energy rating certificate guidelines [30].

Analysis of Building Load Characteristics of Analytical Model

As stated earlier, we hypothesize that the facades of high-rise residential buildings in Korea are similar to those of office buildings, and thus that these buildings have high cooling loads. The building load simulation showed that the heating load was 4285.1 kWh/year (16.0%) and the cooling load was 22,490.0 kWh/year (84.0%), as shown in Figure 7 and Table 4. The energy consumption for heating and cooling may differ depending on the operating conditions or efficiency of the systems. However, as the systems are operated using the IdealLoadsAirSystem, and their performance and relevant conditions are excluded, the load needed to continuously maintain the optimum temperature for indoor comfort was larger for cooling, as presumed in the hypothesis. The cooling load was high because glass constitutes a large portion of the exterior wall, and thus the internal heat gain owing to solar penetration was dominant. Conversely, in the analytical model, the heating load was reduced by the increased internal heat gain through solar penetration. This is explained further in Section 4. Thus, for Korean high-rise residential buildings with high window-to-wall ratios, solar control may be key to reducing the building's energy consumption. We aim to test the second hypothesis, that electrochromic glazing with variable solar control and high solar radiation rejection is more effective at reducing the load on buildings than existing double-glazing and shading devices.

Composition of Double-Glazing and Shading Devices for Comparative Analysis

We collected data concerning the heat and optical properties of commercialized double-glazing and shading devices to analyze the performance of electrochromic glazing in terms of reducing the energy consumption of buildings. The IGDB (LBNL) was used for data collection, and Window 7.2 (LBNL) was used to obtain the data for the double-glazing compositions. For the first category, green, blue, and gray-colored glass was layered with clear glass to form ALT-01, 02, and 03, to compare with the colored glass that was already installed in the given building.
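The enthalpy-difference idea described above for the IdealLoadsAirSystem can be illustrated with a short sketch. The moist-air enthalpy correlation and the example state points below are assumptions made for illustration only; they are not EnergyPlus internals.

```python
# Sketch of the enthalpy-difference idea behind an ideal-loads calculation.
# The moist-air enthalpy correlation (kJ/kg dry air) and the example state
# points are illustrative assumptions, not EnergyPlus source code.

def moist_air_enthalpy(t_c, w):
    """Approximate enthalpy of moist air: sensible + latent terms."""
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

def ideal_zone_load(m_dot, supply, mixed):
    """Load (kW) = mass flow * (h_supply - h_mixed); the sign indicates heating or cooling."""
    return m_dot * (moist_air_enthalpy(*supply) - moist_air_enthalpy(*mixed))

# Example summer condition: mixed (ventilation + outside) air warmer than supply air
load_kw = ideal_zone_load(m_dot=0.35, supply=(15.0, 0.009), mixed=(28.0, 0.012))
print(f"Cooling delivered to the zone: {abs(load_kw):.2f} kW")
```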
To compare its performance with electrochromic glazing on equal terms, the electrochromic glazing was layered with clear glass to form ALT-08. For comparison with low-e glass, the second category, single low-e and double low-e glass was layered with clear glass to form ALT-04 and 05, respectively. "Single" and "double" here represent the number of soft low-e coatings. Emissivity decreased as the number of coatings increased; this lowered the solar heat gain coefficient (SHGC) and the heat transmission coefficient. We used the ALT-08 data to compare the performance on equal terms with low-e glass. We selected Venetian blinds and roll shades as the shading devices for the third category; these devices are commonly used in high-rise residential buildings in Korea. We formed ALT-06 and 07 by integrating them with the double glazing that combined single low-e glass and clear glass. Both shading devices were assumed to be installed indoors, open in winter (October-March), and closed in summer (May-September), with a focus on energy reduction. Moreover, the slats of the Venetian blinds were completely shut in the closed state. The reflectance of the blinds and roll shades was set to 80%, and their transmittance was set to 0%; these values were selected as the condition giving the highest cooling load reduction, based on the results of past studies [33]. For the comparative analysis of performance on equal terms with the shading devices, the electrochromic case consisted of double glazing combining electrochromic glazing and single low-e glass. It was assumed that the electrochromic glazing was bleached in the winter and colored in the summer for energy reduction. Table 5 and Figure 8 show the heat and optical properties of each type of double glazing for the comparative evaluation of load reduction performance.

Performance Evaluation of Hourly Heating and Cooling Loads

For the evaluation of hourly heating and cooling loads, we analyzed the hourly changes in load according to the type of double glazing under design-day conditions obtained using meteorological data for Seoul. As described above, the indoor heating and cooling temperatures were set to 22 °C and 28 °C, respectively, using the IdealLoadsAirSystem. This was done to calculate the heating and cooling loads necessary to reach the indoor set-point temperatures, from which we can determine the reduction in heating and cooling loads according to the type of double glazing.

4.1.1. Comparative evaluation of hourly heating load performance

Figure 9 and Table 6 show a comparison of the performance of the different types of double glazing listed in Table 5 in terms of reducing heating load. Figures 10-12 show the changes in heating load for each type of double glazing. This load is consistently higher when electrochromic glazing is applied, regardless of the time of day. We analyzed the heat flow through the glass to determine the factors directly affecting heating load, and the relevant graph is shown in Figure 9 and Table 7. We enabled DisplayAdvancedReportVariables in Output:Diagnostics as the data output option of the EnergyPlus simulation tool, which supports a detailed analysis of the heat gain and heat loss of the glass.
The formula for calculating heat flow is given in Equation (1):

Q = Σi (TSi + Cvi + IRi − SRi) (1)

where Q indicates the accumulated heat flow through the glass over time (i), calculated by subtracting the hourly accumulation of short-wave radiation transferred back out (SRi) from the hourly accumulation of transmitted solar radiation (TSi), convective heat flow (Cvi), and net IR heat flow (IRi). A positive value (+) implies heat gain, whereas a negative value (−) indicates heat loss. SRi is the transfer of short-wave radiation from the zone back out through the window, in watts; it is a measure of the diffuse short-wave light (from reflected solar radiation and electric lighting) that leaves the zone through the window. Figure 9 shows no significant difference in heat loss among the double glazings during the night, when there was no solar radiation, because the heat transfer coefficients of the different types of double glazing were similar. However, the low solar heat gain coefficient of the electrochromic glazing resulted in low solar penetration during the day; this had the strongest impact on the increase in heating load. In other words, compared with clear double glazing and colored glass, electrochromic glazing has unfavorable heat and optical properties in terms of reducing heating load. Therefore, based on the cumulative heat flow in Table 6, to improve the performance of electrochromic glazing it appears necessary either to achieve as much solar penetration as clear double glazing, or to reduce the heat transmission coefficient to minimize heat loss and compensate for the heat lost overnight. Figure 11 and Table 8 compare the performance of electrochromic glazing with that of low-e double glazing in terms of reducing heating load, and show that the heating load of electrochromic glazing was the highest. Table 8 shows the daily cumulative heating load for each type of double glazing. The heat flow shown in Figure 13 was analyzed to determine the cause of the increase in the heating load of electrochromic glazing. The heat transmission coefficient of the single and double low-e double glazings is lower than that of clear double glazing and electrochromic glazing; thus, their heating load was low at night, when there was no solar radiation. It should be noted that after 10:00 a.m. (Figure 11), the heating load of double low-e glazing increased to a greater extent than that of clear glass. This is because, although the low heat transmission coefficient of double low-e glazing kept heat loss low at night (Figure 13, Table 9), its heat gain became the lowest once solar radiation began, as its solar penetration was lower than that of clear glass. Thus, it appears that double glazing combined with single low-e glass, which can suitably maintain transmittance while increasing heat insulation, is more effective at reducing building energy use in winter, given the variable solar control of electrochromic glazing. Figure 12 and Table 10 compare the performance of electrochromic glazing with that of the shading devices in terms of reducing heating load. The double glazing was formed by combining single low-e glass with all of the shading devices and with the electrochromic glazing, based on the results of the comparative analysis with low-e glass. The heating load of the electrochromic glazing decreased to a greater extent than that of clear glass, because applying single low-e glass reduced the heat transmission coefficient, which in turn reduced the heat lost overnight, as shown in Figure 14 and Table 11.
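The cumulative bookkeeping of Equation (1) above can be sketched in a few lines of Python. The hourly values below are made-up placeholders standing in for the EnergyPlus report variables, so only the arithmetic, not the numbers, is meaningful.

```python
import numpy as np

# Sketch of the cumulative window heat-flow bookkeeping in Equation (1):
# Q = sum_i (TS_i + Cv_i + IR_i - SR_i). The hourly values are placeholders.

ts = np.array([0, 0, 150, 420, 610, 480, 180, 0])        # transmitted solar [W]
cv = np.array([-60, -55, -30, -10, 5, -5, -25, -50])      # convective heat flow [W]
ir = np.array([-80, -75, -50, -20, -10, -15, -40, -70])   # net IR heat flow [W]
sr = np.array([0, 0, 10, 25, 35, 28, 12, 0])              # short-wave radiated back out [W]

hourly_q = ts + cv + ir - sr          # positive = heat gain, negative = heat loss
cumulative_q = np.cumsum(hourly_q)    # accumulated heat flow over time

print("Hourly net heat flow [W]:", hourly_q)
print("Cumulative heat flow [Wh]:", cumulative_q)
```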
The blinds and roll shades were kept open in winter; thus, the heating load and heat flow were the same as those for the single low-e glazing. As the solar heat gain coefficient of electrochromic glazing is lower than that of single low-e glazing, its heating load was still higher than that of the double glazing with the blinds and roll shades open. In summary, the transmittance of the electrochromic glazing was low even in the bleached state, and hence it was not effective in reducing heating load. Therefore, future development of electrochromic glazing should work to increase transmittance in the bleached state. Figure 15 and Table 12 compare the performance of electrochromic glazing in terms of cooling load with that of colored-glass double glazing. The cooling load is always low when electrochromic glazing is applied, regardless of time. The daily cumulative cooling load for each type of double glazing is provided in Table 12. The magnitude of the reduction in cooling load was larger than the increase in heating load. Such a reduction in the cooling load of electrochromic glazing occurred because, as shown in Figure 16 and Table 13, there was no considerable difference in overnight heat loss among the double glazings, as their heat transmission coefficients were similar, while daytime solar penetration was low owing to the low solar heat gain coefficient of electrochromic glazing. In other words, compared with clear double glazing and colored glass, electrochromic glazing has favorable heat and optical properties in terms of reducing cooling load. Figure 17 and Table 14 show a comparison of the performance of electrochromic glazing in terms of cooling load with that of low-e double glazing; the cooling load of electrochromic glazing was clearly lower. The analysis of heat flow in Figure 18 and Table 15 shows that, in summer, the differences in the heat transmission coefficient caused no significant difference in overnight heat loss. The heat gain due to the differences in the solar heat gain coefficient during the day had the strongest impact on the reduction in cooling load. In other words, unlike in winter, where the heat transmission coefficient and solar heat gain coefficient had a combined impact, only the effect of the solar heat gain coefficient was dominant in summer. Therefore, as shown in Figure 17, the electrochromic glazing, with the lowest solar heat gain coefficient, was the most effective in terms of reducing cooling load. Figure 19 and Table 16 compare the performance of electrochromic glazing with that of the shading devices in terms of cooling load. As in winter, single low-e glass was combined with all of the shading devices and with the electrochromic glazing to form double glazing; the blinds and roll shades were closed, and the electrochromic glazing was colored. A reflectance of 90% and a transmittance of 0% were applied to the blinds and roll shades to achieve the highest reduction in cooling load in summer, and the slats of the blinds were assumed to be completely closed. We observed that electrochromic glazing yielded excellent performance in terms of reducing cooling load. Although shading devices block solar radiation, they are installed indoors; thus, the solar radiation that penetrates through the glass is partly absorbed by them and flows indoors in the form of long-wave radiation, thereby affecting the heat gain.
Table 17 shows that the heat gain of the blinds and roll shades was high because of the convection and radiation between the double glazing and the blinds. Figure 20 shows that the indoor surface temperature of the glazing with the shading devices was higher than that of the electrochromic glazing. Therefore, electrochromic glazing is more effective than the currently used shading devices as a solar control system that blocks solar radiation. As shown in Figures 15, 20 and 21, a comprehensive analysis of the hourly heating and cooling loads revealed that the low transmittance of the colored electrochromic glazing resulted in the largest reduction in solar gain from 07:00 to 17:00, and it was thus more effective at reducing cooling loads than the colored glass, low-e glass, and shading devices. Figure 22 and Table 18 show the total annual cooling and heating loads. Even though electrochromic glazing is sub-optimal at reducing heating loads, it was the most effective at reducing the total annual load in buildings where the cooling load is dominant, such as the analytical model in this study. In particular, compared with the shading devices that are mainly used in high-rise residential buildings and with the baseline, electrochromic glazing significantly reduced the total load, by 12.7-14.9%. This implies that electrochromic glazing has potential for use as next-generation structural glass.

Monthly and Annual Changes in Heating and Cooling Loads

Figures 23-25 show the monthly cumulative changes in heating and cooling loads according to the type of double glazing. The same trend as in the analysis of hourly cooling and heating loads was observed. Compared with all of the other types of double glazing, the heating load of electrochromic glazing was high in winter even when bleached, whereas it achieved a reduction in cooling load in the colored state in summer. In other words, applying electrochromic glazing is more effective for buildings with high cooling loads.

Conclusions

To assess the load reduction performance of the recently developed domestic electrochromic glazing, this study conducted a comparative analysis against the double-glazing and shading devices that are used in high-rise residential buildings in Korea. The results of the analysis can be summarized as follows: (1) To test the hypothesis that the cooling load of high-rise residential buildings is high, we analyzed the annual load using an analytical model in the EnergyPlus simulation tool. Cooling and heating loads comprised 84.0% and 16.0% of the total load, respectively, confirming a load pattern focused on cooling. (2) We hypothesized that the load reduction performance of electrochromic glazing with variable solar control and high solar radiation rejection is better than that of double-glazing products and shading devices, and analyzed the cooling and heating loads for each type of double glazing. (3) In South Korea, which has markedly different weather conditions in summer and winter, controlling solar radiation is important. The analysis of the optical properties of the electrochromic glazing showed that the solar transmittance values were 48.1% and 6.5% in the bleached and colored states, respectively. This indicates that allowing and blocking solar radiation during winter and summer, respectively, are effective for reducing the heating and cooling loads in buildings.
(4) Colored electrochromic glazing had higher solar radiation rejection than colored-glass double glazing and low-e double glazing; thus, it is effective for reducing cooling load. Moreover, it was observed to be excellent in terms of cooling load reduction compared with shading devices (blinds and roll shades), because such devices are installed indoors: the solar radiation that penetrates through the glass is partly absorbed by the shading devices and flows indoors in the form of long-wave radiation, thereby increasing the heat gain due to convection and radiation. In contrast, electrochromic glazing blocks solar radiation at the outermost layer, reducing its inflow. (5) Electrochromic glazing was found to be more effective for reducing cooling loads than heating loads. Nevertheless, in buildings with dominant cooling loads, similar to the analytical model used in this study, it exhibited the best performance in terms of reducing the total annual load. The limitations of this study are that it did not analyze the effect of the HVAC operating schedule on energy consumption, the effect of load reduction on the selection of HVAC system capacity, or the life cycle cost (LCC) according to initial investment cost and actual energy consumption. Therefore, it is necessary to examine the actual reduction in cooling and heating energy when cooling and heating systems are applied to buildings, building on this study. Furthermore, we controlled transmittance by limiting the dependent variables of the electrochromic glazing control to the cooling and heating loads, categorizing the glazing as colored in summer and bleached in winter. By expanding the scope of dependent variables according to changes in the environment, more significant findings can be obtained in terms of building energy and the built environment. Furthermore, we limited the shading devices to internal Venetian blinds and roll shades, which are widely used in buildings in South Korea; further research is thus needed for a comparative analysis between external shading devices and electrochromic glazing. This study also made a simple distinction between the summer colored state and the winter bleached state in order to control transmission. This control strategy was chosen with the aim of analyzing the effects of the optical properties of the proposed electrochromic glazing, which exhibited bleached and colored states, on heating and cooling loads. However, a key advantage of electrochromic glazing is that transmission can be controlled in stages. Further research is needed to examine an optical control method that integrates the energy used for cooling, heating, and lighting.
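The total-annual-load comparison quoted above (the 12.7-14.9% reduction relative to the shading-device cases and the baseline) is a simple percentage over the summed heating and cooling loads. The sketch below shows that arithmetic; only the baseline values come from the text, and the alternative-case loads are hypothetical placeholders.

```python
# Illustration of how total-annual-load reductions are computed. Only the
# baseline heating/cooling loads come from the text; the alternative-case
# values are hypothetical placeholders.

baseline = {"heating": 4285.1, "cooling": 22490.0}   # kWh/year (clear double glazing)
alternatives = {
    "ALT-06 (blind, hypothetical)": {"heating": 4300.0, "cooling": 20500.0},
    "EC glazing (hypothetical)":    {"heating": 4900.0, "cooling": 17900.0},
}

base_total = sum(baseline.values())
for name, loads in alternatives.items():
    total = sum(loads.values())
    print(f"{name}: total {total:,.0f} kWh/year, "
          f"reduction {(base_total - total) / base_total:.1%} vs. baseline")
```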
Targeting carbonic anhydrase IX improves the anti-cancer efficacy of mTOR inhibitors

The inhibition of the mechanistic target of rapamycin complex 1 (mTORC1) by chemical inhibitors, such as rapamycin, has demonstrated anti-cancer activity in preclinical and clinical trials. Their efficacy is, however, limited, and tumors eventually relapse through resistance formation. In this study, using two different cancer mouse models, we identify tumor hypoxia as a novel mechanism of resistance of cancer cells against mTORC1 inhibitors. Indeed, we show that the activity of mTORC1 is mainly restricted to the non-hypoxic tumor compartment, as evidenced by a mutually exclusive staining pattern of the mTORC1 activity marker pS6 and the hypoxia marker pimonidazole. Consequently, whereas rapamycin reduces cancer cell proliferation in non-hypoxic regions, it has no effect in hypoxic areas, suggesting that cancer cells proliferate independently of mTORC1 under hypoxia. Targeting the hypoxic tumor compartment by knockdown of carbonic anhydrase IX (CAIX) using short hairpin RNA or by chemical inhibition of CAIX with acetazolamide potentiates the anti-cancer activity of rapamycin. Taken together, these data emphasize that hypoxia impairs the anti-cancer efficacy of rapalogs. Therapeutic strategies targeting the hypoxic tumor compartment, such as the inhibition of CAIX, potentiate the efficacy of rapamycin and warrant further clinical evaluation.

INTRODUCTION

Over the last decade, extensive research focused on targeting signaling pathways that are deregulated in cancer and promote tumor growth. In this context, blocking the mechanistic target of rapamycin (mTOR) has demonstrated clinical benefits in cancer patients that are however limited due to the development of resistance mechanisms [1][2][3][4]. Hence, it is important to identify these mechanisms in order to improve the efficacy of mTOR inhibitors. mTOR exerts its biological functions as a subunit of two different protein complexes: the mTOR complex 1 (mTORC1) and the mTOR complex 2 (mTORC2) [5]. mTORC1 is activated by growth-promoting stimuli such as growth factors or amino acids. In contrast, unfavorable growth conditions such as low energy levels or hypoxia lead to mTORC1 inactivation. Once activated, mTORC1 controls cell growth through various mechanisms including protein, lipid and nucleotide synthesis as well as inhibition of autophagy [5,6]. In the context of cancer, overactivation of mTORC1 is frequently observed in human tumors, caused either by activating mutations of upstream components of the mTOR signaling pathway or by mutations of mTOR itself [7,8]. Accordingly, several preclinical and clinical studies have evaluated the anti-cancer efficacy of mTORC1 inhibition by the chemical inhibitor rapamycin and its analogs termed rapalogs. Following encouraging results in mouse models, the efficacy of rapalogs was however lower than expected in patients [3].
Regions of hypoxia are frequently present in tumors and profoundly influence the biology of cancer and its response to therapies [9,10]. Classically, a high rate of cancer cell proliferation combined with structural abnormalities of tumor endothelial cells induces regions in tumors featuring low oxygen levels. Tumor cells are able to quickly adapt to this hypoxic microenvironment by inducing the transcriptional activity of hypoxia inducible factors [11]. Among the different proteins whose expression is stimulated by hypoxia, emerging evidence points out the importance of the carbonic anhydrase IX (CAIX) enzyme in promoting cancer cell growth [12]. Indeed, blocking CAIX either by genetic manipulations or by using chemical inhibitors significantly reduces the growth of tumor xenografts in mice [13][14][15]. Moreover, targeting CAIX enhances the anti-cancer efficacy of anti-angiogenic therapies or radiotherapy [14,16]. Furthermore, expression of CAIX in human tumor samples is associated with tumor progression and poor prognosis [17,18]. In this study, we found that mTORC1 activity was diminished in hypoxic tumor regions. Consequently, rapamycin failed to reduce cancer cell proliferation in these regions. We further observed that combining treatments that target components of the hypoxic tumor response, such as CAIX, potentiated the anti-cancer efficacy of rapamycin. Taken together, these results show that hypoxic cancer cells proliferate independently of mTORC1 and are hence intrinsically resistant to mTOR inhibitors. They further provide a rationale to combine CAIX and mTOR inhibitors in cancer therapy.

RESULTS

mTORC1 activity in cancer cells is reduced in hypoxic regions of the tumor

It was previously reported that hypoxia inhibits mTORC1 activity in vitro [19]. We therefore first hypothesized that mTORC1 function is mainly present in non-hypoxic areas of a tumor. To test this, human colorectal adenocarcinoma cell line HT29 xenografts were generated in nude mice and murine colon adenocarcinoma cell line MC-38 allografts in C57BL/6 mice. After tumor harvest, mTORC1 activity and hypoxia were detected using immunohistochemical staining for phospho-S6 ribosomal protein (pS6) and pimonidazole, respectively. In addition, proliferating cell nuclear antigen (PCNA) staining was applied to assess cancer cell proliferation. We found that pS6 and pimonidazole stainings negatively correlated, whereas PCNA staining revealed proliferation in both compartments (Figure 1A). Similarly, the staining of phospho-4E-BP1, another downstream target of mTORC1, was predominantly found in pimonidazole-negative tumor areas (Supplementary Figure S1). This suggests that, in hypoxic zones, cancer cells proliferate despite the reduction of mTORC1 activity (Figure 1A and 1B). The proliferation rate was, however, significantly decreased in hypoxic compared to non-hypoxic regions (proliferation rate in hypoxic regions: HT29 71.4%, MC-38 68.9%; proliferation rate in non-hypoxic regions: HT29 85.7%, MC-38 86.4%) (Figure 1B).
Rapamycin selectively reduces proliferation of cancer cells in non-hypoxic zones of tumors

We next hypothesized that, since mTORC1 activity is reduced in hypoxic regions, blocking mTOR with rapamycin would not influence cancer cell proliferation in these regions. To test this, nude mice bearing HT29 tumor xenografts or C57BL/6 mice bearing MC-38 allografts were treated with rapamycin or vehicle as a control. We found that, whereas rapamycin significantly reduced tumor cell proliferation in non-hypoxic zones (HT29: vehicle 85.7%, rapamycin 74.3%; MC-38: vehicle 86.4%, rapamycin 74.2%), it had no effect on hypoxic areas (HT29: vehicle 71.4%, rapamycin 73.6%; MC-38: vehicle 68.9%, rapamycin 70.5%) (Figure 2A). In accordance with these in vivo results, we found no anti-proliferative effect of rapamycin on cancer cells cultured in hypoxic conditions (1% oxygen) (Figure 2B). Interestingly, in vivo, rapamycin increased the hypoxic tumor compartment compared to controls in both HT29 tumor xenografts (from 18.5% to 37.6%) and MC-38 tumor allografts (from 14.0% to 38.2%) (Figure 3A). Consistent with this, we observed that rapamycin increased the level of CAIX protein expression from 18.2% to 32.4% in HT29 tumors and from 7.8% to 22.8% in MC-38 tumors. This increased expression of CAIX was associated with a reduction of CD31-positive tumor blood vessels (Figure 3A). Upregulation of CAIX levels in the treated xeno- and allografts was confirmed by qRT-PCR (Figure 3B).

The anti-cancer activity of rapamycin is increased in combination with acetazolamide

Since rapamycin treatment increases the expression of CAIX, and CAIX is known to promote cancer progression, we next asked whether blocking CAIX activity would improve the efficacy of rapamycin. To test this, we treated nude mice bearing HT29 tumor xenografts or C57BL/6 mice bearing MC-38 allografts with acetazolamide alone or in combination with rapamycin. Although acetazolamide is a non-specific inhibitor of carbonic anhydrase enzymes, we opted for it as it is a regularly used agent and well tolerated by patients [20]. We found that both acetazolamide and rapamycin alone reduced tumor growth. The effect was, however, significantly stronger when both agents were combined (Figure 4A and 4B). The effect was long-lasting as, after three months of treatment, HT29 tumor xenografts still did not exceed a size of 230 mm³ (Figure 4C). Histological analysis revealed that acetazolamide increased tumor necrosis (from 7.4% to 53.8% and from 7.8% to 44.8% in HT29 and MC-38 tumors, respectively) and the number of tumor blood vessels (increase by 78.4% in HT29 and 93.3% in MC-38 tumors) (Figure 5). Interestingly, acetazolamide reduced proliferation in hypoxic (from 71.4% to 54.6% and from 68.9% to 54.5% in HT29 and MC-38 tumors, respectively) but not in non-hypoxic regions, whereas the opposite was observed with rapamycin. Combined treatment with rapamycin and acetazolamide produced antiproliferative effects in both the hypoxic and non-hypoxic areas (Figure 6A and 6B). Taken together, these data demonstrate that combining acetazolamide with rapamycin exhibits stronger anti-cancer effects than either drug alone.
Targeting CAIX potentiates the anti-cancer efficacy of rapamycin

We next investigated whether, among the different carbonic anhydrase enzymes, targeting specifically CAIX would potentiate the anti-cancer effect of rapamycin. To test this, we knocked down CAIX in HT29 cells (HT29 shCAIX) by infecting HT29 cells with lentiviruses containing CAIX shRNA. Knockdown of CAIX was confirmed by qRT-PCR (Figure 7A) and by immunostaining of tumor xenografts (Figure 7E). HT29 shCAIX or control HT29 cells expressing a scrambled shRNA were injected subcutaneously into nude mice, and tumor growth was monitored. We found that the growth of HT29 shCAIX tumor xenografts was reduced compared to control HT29 (Figure 7B). Rapamycin treatment further reduced the growth of HT29 shCAIX tumor xenografts. This growth reduction was also stronger compared to rapamycin-treated control xenografts. The tumor size of shCAIX xenografts was still below 200 mm³ after a long-term rapamycin treatment of 20 days (Figure 7C). CAIX knockdown reduced proliferation in the hypoxic but not the non-hypoxic tumor regions. A combined inhibition of mTORC1 and CAIX provoked antiproliferative effects in both the hypoxic and non-hypoxic areas (Figure 7D). Furthermore, CAIX knockdown decreased tumor hypoxia and tumor necrosis, but increased CD31-positive tumor vasculature (Figure 7E). Taken together, these results emphasize that an inhibition of CAIX potentiates the efficacy of rapamycin.

DISCUSSION

Targeting mTOR is a promising approach in cancer therapy. However, in the course of time, cancers adapt to mTOR inhibition and develop resistance mechanisms that are responsible for the limited efficacy of mTOR inhibitors [2]. To date, most identified resistance mechanisms are the consequence of an abrogation of negative feedback loops following mTORC1 inhibition [21]. In this study, we show that the hypoxic microenvironment of a tumor is resistant to mTOR inhibitors, and we propose that targeting the hypoxic region of a tumor is an effective approach to enhance their anti-tumor efficacy. Most tumors are characterized by regions of hypoxia that contribute to chemo- and radioresistance [9]. Our findings show that hypoxia also contributes to resistance to mTOR inhibitors. Indeed, we outline that mTORC1 activity is not present in hypoxic regions of a tumor, as evidenced by a mutually exclusive staining pattern of pimonidazole and pS6 (Figure 1A). Consistent with this observation, it was reported that mTORC1 activity negatively correlated with HIF-1α expression in renal cancer xenografts [22]. Furthermore, we observe that rapamycin reduces cancer cell proliferation in non-hypoxic but not in hypoxic regions of a tumor (Figure 2A). This further supports our hypothesis that hypoxic cancer cells proliferate independently of mTORC1. Consistent with our observation that rapamycin exerts a tumor region-selective anti-proliferative effect, it was reported that, whereas rapamycin decreases proliferation in the outer, well-vascularized part of tumors, it promotes cancer cell proliferation in hypovascular areas of tumors [23]. Another explanation for the loss of antiproliferative activity of rapamycin in hypoxic tumor regions is a reduced access of rapamycin to hypoxic areas. Indeed, rapamycin is well characterized for its anti-angiogenic properties [24,25]. Consistent with this, we found that rapamycin decreased the number of blood vessels and increased tumor hypoxia, which could secondarily lead to reduced rapamycin delivery to hypoxic tumor regions.
Adaptation of cancer cells to hypoxia involves the expression of many genes that are under the control of the transcription factor HIF-1 [9,26]. Interestingly, several studies have demonstrated that HIF-1 expression is positively regulated by mTORC1, which would argue for a control of hypoxic gene activation by mTORC1 [27][28][29]. The role of mTORC1 in regulating hypoxia-induced HIF-1 expression seems however complex and might depend on the level of hypoxia. Actually, whereas rapalogs were shown to partially reduce the level of HIF-1 induced by mild hypoxia (3% O2), this effect was abrogated in conditions of severe hypoxia (0.3% O2) [22]. Consistent with this, we demonstrate that rapamycin increases the expression of CAIX, which is under the control of HIF-1 (Figure 3). This further suggests that HIF-1-mediated hypoxic gene activation is not controlled by mTORC1 in the hypoxic tumor microenvironment. Among the various proteins that are controlled by HIF-1, mounting evidence has outlined the importance of CAIX in cancer biology. In different experimental settings, CAIX knockdown reduced the growth rate of tumor xenografts, and conversely, overexpression of CAIX increased tumor growth [13][14][15]. Similarly, we see that CAIX knockdown slows the progression of tumor xenografts (Figure 7B and 7C). We further show that cancer cell proliferation in hypoxic zones is reduced following knockdown of CAIX (Figure 7D), highlighting the importance of CAIX for maintaining favorable growth conditions in a hypoxic environment. Similarly, in vitro, it was demonstrated that CAIX expression was associated with proliferation in hypoxic areas of cancer cell spheroids [14]. Therefore, CAIX actively participates in tumor growth, and accordingly, selective inhibitors of CAIX are currently tested in clinical trials [30,31]. The primary function of CAIX is the hydration of carbon dioxide to bicarbonate and a proton, maintaining an alkaline intracellular pH and promoting an acidic extracellular space [30,32]. Our observation therefore suggests that increasing the tumor pH via inhibition of CAIX might potentiate the efficacy of mTOR inhibitors. Of note, acidic extracellular pH inhibits mTORC1 function [33,34]. Consequently, in addition to hypoxia, acidic tumor pH might further downregulate mTORC1 activity and promote an mTORC1-independent cancer cell proliferation. Besides CAIX, several other proteins are implicated in the regulation of tumor pH, including bicarbonate transporters, proton pumps, Na+/H+ exchanger 1 or monocarboxylate transporters [35,36]. Future studies will characterize whether modifying the functions of these proteins might influence the anti-cancer efficacy of mTORC1 inhibitors. A combination of rapamycin with CAIX inhibitors is further justified by the observation that rapamycin treatment increases the extent of hypoxia in tumors, which is associated with an increased expression of CAIX (Figure 3). This observation relies mostly on the reduction of the number of tumor blood vessels induced by rapamycin. Therefore, targeting CAIX is of importance in combination with treatments that increase tumor hypoxia such as anti-angiogenesis therapies. A reinforcement of the anti-tumoral activity of the anti-VEGF antibody bevacizumab by an inhibition of CAIX further supports this hypothesis [14].
Our study further underlines the benefit of therapeutic approaches that target the hypoxic tumor response in combination with anti-angiogenic agents. Consistent with our findings, such an approach has shown promising results in other pre-clinical models [37]. For example, targeting HIF1α increases the efficacy of bevacizumab in neuroblastoma xenografts [38]. A similar effect has also been reported when bevacizumab is combined with acetazolamide [14,39]. More importantly, such strategies are also tested in clinical trials [37]. Combining inhibitors of the hypoxic tumor response with rapamycin could be particularly beneficial compared to their combinations with other anti-angiogenic drugs. Indeed, the importance of mTORC1 in tumors is not restricted to tumor vessels; mTORC1 also affects cancer cells and cells present in the tumor microenvironment. Hence, blocking mTORC1 can lead to unexpected effects due to the variety and complexity of cellular responses induced by mTORC1 inhibition. For example, it was demonstrated that, in a mouse model of K-Ras-induced pancreatic tumors, mTORC1 inhibition has opposing effects on tumor cell proliferation in nutrient-rich versus nutrient-depleted conditions. Whereas it blocks tumor cell proliferation in vascular tumor areas, it increases tumor cell proliferation in hypovascular, nutrient-depleted regions, resulting overall in tumor growth [23]. This further outlines that targeting the hypoxic tumor regions can potentiate the efficacy of mTORC1 inhibitors. Preclinical studies have underlined a role of acetazolamide in restraining cellular processes involved in tumor progression. For instance, renal cancer cell invasiveness and survival are diminished by acetazolamide in vitro [40,41]. In addition, acetazolamide has shown anti-tumor properties in murine models [14,42,43]. Our data further demonstrate that acetazolamide-induced tumor growth inhibition is associated with reduced cancer cell proliferation in hypoxic zones (Figure 6A and 6B), which is in accordance with what we observe following selective knockdown of CAIX (Figure 7D). However, whereas CAIX knockdown significantly reduces tumor necrosis (Figure 7E), treatment with acetazolamide results in increased necrosis (Figure 5). The non-selective property of acetazolamide for the inhibition of carbonic anhydrase isoforms might explain this discrepancy. We found that acetazolamide treatment or CAIX knockdown in cancer cells significantly increased the number of blood vessels in tumors. The molecular mechanisms underlying this effect remain however uncharacterized. Acidity is known to reduce endothelial cell proliferation and migration as well as endothelial cell sprouting [44,45]. Hence, increasing intratumoral pH following the inhibition of carbonic anhydrase enzymes could favor tumor angiogenesis. A direct effect of acetazolamide on endothelial cells also has to be considered. Indeed, carbonic anhydrase II has been detected on tumor endothelial cells, and its inhibition could affect endothelial cell functions that are relevant to angiogenesis [46]. Clearly, additional investigations are needed to fully characterize the effect of targeting carbonic anhydrase enzymes on tumor endothelium.
The mechanisms responsible for the anti-cancer activity of rapamycin are complex and not restricted to its effect on tumor and endothelial cells [2]. Consistent with this, the antiproliferative effect of rapamycin on cancer cells that we describe here is small and cannot solely account for the anti-cancer activity of rapamycin in our study as well as in patients. Recent studies have demonstrated the complex role of mTORC1 in the immune system [47]. Although rapamycin is classically used as an immunosuppressive drug, several lines of evidence point out that in distinct conditions rapamycin favors the CD8+ memory response and hence mediates an immunostimulatory response [48][49][50]. Thus, besides the antiproliferative effects of rapamycin, several other aspects contribute to its anti-cancer activity and need to be fully identified. Our experimental set-up did not allow us to investigate the tumor immune response. Hence, future experiments will determine whether inhibition of the hypoxic tumor response in combination with rapamycin positively influences the tumor immune response. Tumors are characterized by genetic and epigenetic alterations that play a fundamental role in malignant transformation. However, genetic and epigenetic alterations display a spatial and temporal heterogeneity and vary in consequence of anti-cancer treatments [51]. This tumor heterogeneity may account for the limited efficacy of targeted therapies. Our findings further emphasize that the tumor microenvironment affects signaling pathways that participate in tumor growth, adding another level of complexity to the molecular heterogeneity of tumors. In summary, our study shows that mTORC1 activity is reduced in hypoxic tumor regions, which consequently resist mTORC1 inhibitors. Targeting the hypoxic microenvironment represents a novel therapeutic strategy to potentiate the efficacy of mTORC1 inhibitors.

Stable transfection

To knock down CAIX in HT29 cells, we used lentivirus produced with the Lenti-vpak packaging kit from OriGene following the manufacturer's instructions and containing the CAIX gene-specific shRNA expression vector (TL314250B) or a negative control non-effective HuSH 29-mer scrambled shRNA cassette (TR30021) in the pGFP-C-shLenti plasmid. Cells were grown under selective pressure (puromycin 10 μg/ml), and the knockdown efficiency was tested by qRT-PCR.

qRT-PCR

RNA extraction was performed using the RNeasy Mini Kit from Qiagen by following the manufacturer's instructions. We used 500 ng of RNA for reverse transcription with SuperScript II Reverse Transcriptase from ThermoFisher Scientific. The resulting cDNA was used for qRT-PCR (Rotor-Gene Q from Qiagen). qRT-PCR reactions were set up in triplicate with the KAPA SYBR FAST qPCR Kit Master Mix Universal KK4602 from Kapa Biosystems. Relative gene expression and fold changes were determined using the 2^-ΔΔCT method with GAPDH as an internal control [53]. Primer sequences were: human CAIX forward GGG TGT CAT CTG GAC TGT GTT, human CAIX reverse CTT CTG TGC TGC CTT CTC ATC, human GAPDH forward CCA TGG GGA AGG TGA AGG TC, human GAPDH reverse ACG TAC TCA GCG CCA GCA TC, mouse CAIX forward GCT GTC CCA TTT GGA AGA AA, mouse CAIX reverse GGA AGG AAG CCT CAA TCG TT, mouse GAPDH forward AAG AGG GAT GCT GCC CTT A, mouse GAPDH reverse TTG TCT ACG GGA CGA GGA AA.
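For readers unfamiliar with the 2^-ΔΔCT calculation referenced above, the following minimal sketch implements the Livak relative-expression method with GAPDH as the internal control. The Ct values are placeholders chosen for illustration, not data from the study.

```python
# Minimal sketch of the 2^-ΔΔCt relative-expression calculation (Livak method)
# used for the qRT-PCR analysis, with GAPDH as the internal control.
# The Ct values below are placeholders, not data from the study.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize to GAPDH
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: CAIX expression in a rapamycin-treated tumor vs. a vehicle control
fc = fold_change(ct_target_sample=24.1, ct_ref_sample=18.0,
                 ct_target_control=26.0, ct_ref_control=18.2)
print(f"CAIX fold change vs. control: {fc:.2f}")
```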
Immunohistochemistry

Xeno- and allografts were fixed in 4% formalin overnight, dehydrated with ethanol and paraffin-embedded. Sections of 3 μm were obtained using a MICROM HS355S microtome, and tissue sections were mounted on Superfrost Plus slides. Slides were then deparaffinized and rehydrated with xylol and alcohol. After antigen retrieval (citrate pH 6.0 or TRIS/EDTA pH 9.0), sections were immunostained with the above-mentioned primary antibodies for a 60-minute incubation and subsequently incubated with Dako EnVision HRP secondary rabbit or mouse antibody for 30 minutes. One section from each xenograft and allograft tumor and three tumors for each condition were analyzed for each staining. A Carl Zeiss Axioscope, AxioCam MRc and AxioVision 40V 4.6.3.0 software from Carl Zeiss Imaging Solutions GmbH were used for microscopy, image acquisition and image processing.

Mouse models

Animal experiments were in accordance with the Swiss federal animal regulations and approved by the local veterinary office. Female nude and C57BL/6 eight-week-old mice were purchased from Janvier Labs. HT29 cells (3 × 10^6) and MC-38 cells (1 × 10^6) were injected subcutaneously into the right flank. Once the tumor xeno-/allografts reached a mean size of 25 mm³, mice were randomized into different groups (n=5/group; groups "vehicle", "acetazolamide", "rapamycin", "acetazolamide and rapamycin") and treated daily with rapamycin (3 mg/kg/day, intraperitoneally), acetazolamide (40 mg/kg/day, intraperitoneally), a combination of both or vehicle only. Tumor volumes were measured daily using a caliper and calculated with the formula V = A * B * C * π / 6, where A is the length, B the width and C the height of the tumor. Animals were sacrificed once the biggest tumor of vehicle-treated mice reached a size of 1000 mm³ (defined as the interruption criterion according to veterinary recommendations). Pimonidazole HCl, 15 mg/ml in NaCl 0.9%, was injected intraperitoneally at a dosage of 60 mg/kg, 60 minutes before tissue harvest. Tumors were excised and samples processed for immunohistochemical analysis.

Statistics

Statistical analyses including Student's t-test, One-way ANOVA and Two-way ANOVA were carried out as appropriate using GraphPad Prism version 6.05.

Figure 1: mTORC1 activity is reduced in hypoxic regions of a tumor. A. Serial sections of HT29 tumor xenografts and MC-38 tumor allografts were stained for pimonidazole, pS6 or PCNA. Arrows point to pimonidazole-positive, pS6-negative regions. Scale bar, 200 μm. B. Percentage of PCNA-positive cancer cells was counted in 10 representative pimonidazole-positive and 10 representative pimonidazole-negative zones of a 100 × 100 μm surface for three different HT29 and MC-38 tumors, for a total of 30 pimonidazole-positive and 30 pimonidazole-negative zones for each cell line (1 pimonidazole-positive and 1 pimonidazole-negative area used for counting is highlighted by squares (white in pimonidazole staining and black in PCNA staining) under A and displayed under B). Bar charts represent mean, error bars represent SD. *** p<0.001, Student's t-test. Representative image section below corresponding bar chart; scale bar, 100 μm.

Figure 2: Hypoxic tumor regions proliferate independently of mTORC1 and are resistant to rapamycin. A.
Percentage of PCNA-positive cancer cells was counted in 10 representative pimonidazole-positive and pimonidazole-negative zones of a 100 × 100 μm surface for three different HT29 and MC-38 tumors. Representative image section below corresponding bar chart; scale bar, 100 μm. B. MTS cell proliferation assay of HT29 cells cultured in hypoxia (1% O2) or non-hypoxia (21% O2) and treated with DMSO or rapamycin 100 nM was performed after 48 and 96 hours. Bar charts represent mean, error bars represent SD. **** p<0.0001, *** p<0.001, ns=not significant, Student's t-test.

Figure 3: Rapamycin increases carbonic anhydrase IX expression, tumor hypoxia and tumor necrosis and decreases tumor vasculature. A. Percentage of tumor hypoxia (pimonidazole-positive surface), tumor necrosis (light pink stained surface in H&E) and CAIX expression (CAIX-positive surface) were compared for vehicle- and rapamycin-treated tumors of HT29 xenografts and MC-38 allografts in 10 representative sections of 3368 × 2668 μm for three different tumors. Tumor vasculature was analyzed by counting CD31-positive vessels in 10 representative sections of 200 × 200 μm for three different tumors. Scale bars, 200 μm. B. mRNA was extracted from 5 tumors and tested for CAIX levels and GAPDH as a control by qRT-PCR. Bar charts represent mean, error bars represent SD. **** p<0.0001, Student's t-test.

Figure 4: Acetazolamide potentiates the anti-cancer efficacy of rapamycin. A. HT29 xenograft growth curves for treatments with vehicle, acetazolamide (40 mg/kg daily), rapamycin (3 mg/kg daily) or a combination of both. B. MC-38 allograft growth curves with treatments as under A. C. Long-term effect of acetazolamide/rapamycin treatments on the growth of HT29 xenografts. Arrows denote the start of treatment at 25 mm³ graft volume. **** p<0.0001, *** p<0.001, n=5/group, Two-way ANOVA.

Figure 6: CAIX inhibition by acetazolamide has an anti-proliferative effect on hypoxic tumor regions. A, B. Percentage of PCNA-positive cancer cells was counted in 10 representative pimonidazole-positive and pimonidazole-negative zones of a 100 × 100 μm surface for three different HT29 (A) and MC-38 (B) tumors, respectively. Bar charts represent mean, error bars represent SD. **** p<0.0001, *** p<0.001, ns=not significant, Student's t-test. Representative image section below corresponding bar chart; scale bar, 100 μm.

Histology analysis was performed by two researchers blinded to groupings. Percentage of tumor hypoxia (pimonidazole-positive surface), tumor necrosis (light pink stained surface in H&E) and CAIX expression (CAIX-positive surface) were measured quantitatively using the ImageJ 1.46r Threshold Colour Plugin by analyzing 10 representative images of 3368 × 2668 μm for each condition in three different tumors. Tumor vasculature was analyzed by counting CD31-positive vessels in 10 representative sections of 200 × 200 μm for three different tumors. PCNA-positive and PCNA-negative cancer cells were counted in 10 representative pimonidazole-positive and pimonidazole-negative vital tumor zones of a 100 × 100 μm surface for three different HT29 and MC-38 tumors. Proliferation rate was calculated by dividing the number of PCNA-positive cancer cells by the number of PCNA-positive and PCNA-negative cancer cells.
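Two of the quantification steps described above, the caliper-based tumor volume and the PCNA-based proliferation rate, reduce to simple arithmetic; the sketch below reproduces both with placeholder numbers only.

```python
import math

# Two quantification steps described above, with placeholder numbers:
# (1) caliper-based tumor volume, V = A * B * C * pi / 6;
# (2) proliferation rate = PCNA-positive / (PCNA-positive + PCNA-negative).

def tumor_volume(length_mm, width_mm, height_mm):
    return length_mm * width_mm * height_mm * math.pi / 6.0

def proliferation_rate(pcna_positive, pcna_negative):
    return pcna_positive / (pcna_positive + pcna_negative)

print(f"Tumor volume: {tumor_volume(10.0, 8.0, 6.0):.1f} mm^3")   # ~251 mm^3

# (positive, negative) counts per 100 x 100 um zone -- placeholders
hypoxic_zones = [(30, 14), (27, 11), (25, 12)]
rates = [proliferation_rate(p, n) for p, n in hypoxic_zones]
print(f"Mean proliferation rate in hypoxic zones: {sum(rates) / len(rates):.1%}")
```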
HT29 cells were plated on 96-well plates (Costar) at 5,000 cells per well, cultured in DMEM in hypoxia (1% O2) or non-hypoxia (21% O2), and treated with DMSO or 100 nM rapamycin for 48 and 96 hours. Cellular proliferation was monitored after 48 and 96 hours with the CellTiter 96 AQueous One Solution Cell Proliferation Assay (MTS) (Promega Corporation) by following the manufacturer's instructions. Absorbance at 492 nm was measured 30 minutes after compound administration. The experiment was performed in quadruplicate and repeated three times.
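A typical way to express such MTS readings is to normalize the mean background-corrected absorbance of treated wells to the DMSO control; the sketch below assumes that convention, and the A492 values and blank correction are placeholders rather than data from the study.

```python
import statistics

# Sketch of a typical MTS readout normalization: mean background-corrected
# absorbance of treated wells divided by the DMSO control. The A492 readings
# and the blank value are placeholders, not data from the study.

def relative_viability(treated_a492, control_a492, blank=0.05):
    treated = statistics.mean(a - blank for a in treated_a492)
    control = statistics.mean(a - blank for a in control_a492)
    return treated / control

rapa_hypoxia = [0.82, 0.79, 0.84, 0.81]    # quadruplicate wells, 96 h
dmso_hypoxia = [0.85, 0.83, 0.86, 0.84]
print(f"Relative proliferation (rapamycin vs. DMSO, hypoxia): "
      f"{relative_viability(rapa_hypoxia, dmso_hypoxia):.2f}")
```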
Modeling the combined effect of initial density and temperature on the soil–water characteristic curve of unsaturated soils

The soil–water characteristic curve (SWCC) plays an important role in solving the stability and deformation problems of unsaturated soils. In many practical situations, soils experience both deformation and thermal loading. To this end, the paper proposes a simple and effective model to predict the combined effect of initial density and temperature on the SWCC and thereby quantify the changes in the thermal-hydro-mechanical behavior of unsaturated soils. In the first step, an initial density-dependent SWCC model is presented using the translation principle between the particle-size distribution curve and the soil–water characteristic curve. In the second part, a non-isothermal model is proposed to predict the effect of temperature on the SWCC. The key to the non-isothermal model is the consideration of five temperature-dependent functions: surface tension, contact angle, particle-size expansion, void ratio, and water density. On the basis of 22 data sets of thermal volume change, this study also develops a theoretical correlation between void ratio and temperature that is directly related to soil plasticity. It was observed that the value of the thermal void ratio increases as soil plasticity increases, and there is a nonlinear relationship between the plasticity index and the void ratio. Because of this, soils with high plasticity are more susceptible to volume changes caused by temperature fluctuations than soils with low plasticity. A coupled mechanical–thermal model is then produced that is capable of predicting, separately or simultaneously, the effects of temperature and initial density on the SWCC. The proposed model is validated against several test data sets available in the literature. The results show that the proposed model performs well in predicting the variation in the SWCC with arbitrary temperature and initial density.
e: Void ratio (dimensionless)
e0: Initial void ratio at reference state (dimensionless)
ei: Arbitrary initial void ratio (dimensionless)
fσ: Surface tension factor (dimensionless)
fα: Air-water contact angle factor (dimensionless)
fR: Particle-size factor (dimensionless)
fe: Void ratio factor (dimensionless)
fρ: Water density factor (dimensionless)
k: Material-related coefficient (1/°C)
li: Total pore length or height of a cylindrical capillary tube (m)
msi: Solid mass corresponding to segment i (kg)
ni: Number of spherical particles (dimensionless)
(ni)e0: Number of soil particles corresponding to the reference initial void ratio (dimensionless)
(ni)ei: Number of soil particles corresponding to an arbitrary initial void ratio (dimensionless)
N: Number of measured data pairs for the same degree of saturation (dimensionless)
ua: Pore-air pressure (Pa)
uw: Pore-water pressure (Pa)
ri: Pore radius (m)
R: Particle radius (m)
Se0: Degree of saturation corresponding to the reference SWCC (dimensionless)
T0: Reference temperature in degrees Celsius (°C)
T: Current temperature in degrees Celsius (°C)
V: Total volume of a considered soil sample (m³)
Vs: Total solid volume per unit sample mass (m³)
Vsi: Volume of a soil particle (m³)
Vv: Total void volume per unit sample mass (m³)
Vvi: Void volume per unit sample mass corresponding to segment i (m³)
α: Air-water contact angle (degree)
cos α: Wetting coefficient (dimensionless)
ψ: Matric suction (Pa)
ρs: Particle density (kg/m³)
δ: A calibrated factor (dimensionless)
σs: Air-water surface tension (N/m)
ΔhT: Immersion enthalpy per unit area (J/m²)
ΔV: Thermal volume change of solid particle (m³)
βv: Volumetric thermal expansion coefficient of solid particles (1/°C)
ψe0: Matric suction corresponding to the initial void ratio at reference state (Pa)
ψei: Matric suction corresponding to an arbitrary initial void ratio (Pa)

1 Introduction

Unsaturated soils exist widespread in nature, particularly in surface soil layers, embankments, seasonal areas, arid and semi-arid areas, and locations with a deep groundwater table. Moreover, climate change in recent years has had a significant influence on the variation in the water content of soils due to evaporation and infiltration, and soils are thus subjected to cyclic changes between saturated and unsaturated states. For unsaturated soils, the soil-water characteristic curve (SWCC) plays an important role in predicting the hydraulic conductivity, shear strength, water storage function, and soil structure stability [12,52,53,57,61,94,95]. Because of the strong relationship between the SWCC and the thermal-hydro-mechanical properties of soils, it is crucial to study the variation in the SWCC in order to quantify the changes in the thermal-hydro-mechanical behavior of unsaturated soils. The soil-water characteristic curve represents the potential energy variation in the fluid phases and is usually described as the relationship between soil suction and water content or degree of saturation. Because the water retention test is an expensive and time-consuming procedure, several empirical models were proposed to plot the SWCC based on a limited number of test points [11,19,27,82]. These models were also reviewed and discussed in detail in other research [40,74,90]. In recent years, considerable attention has been paid to the study of SWCC variation with different factors such as density [55,81,92], temperature [15,56,73], compaction state [51],
stress history [23], initial water content [8], adsorption [2], pore-size distribution [20], hydraulic hysteresis [37,89], and soil structure [78,93]. Among these factors, initial density and temperature are of particular interest to geotechnical engineers, as they have a relatively significant influence on the SWCC. The effect of soil density or porosity is one of the most important factors to investigate, since soil density is strongly linked to the mechanical properties of unsaturated soils [45]. It is generally known that a small change in soil density is sufficient to lead to a significant change in the hydro-mechanical behavior of soils. Yet, soil density is relatively sensitive to variations in the stress state of soils and in environmental conditions. In this regard, the study of the density effect on the SWCC becomes a much more attractive problem and a topic of broad interest. Several researchers have thus attempted to model the effect of soil density on the SWCC [21,22,60,67,68,77,87,96,98]. Several of these existing models were generated by revising existing SWCC equations to introduce one or more void ratio-dependent fitting parameters. Several other SWCC models were created by taking into account the variation in water content as a function of the void ratio. However, it was found that, even at the same water content, variations in void ratio led to changes in the SWCC [4,41,65,88]. Another method used by some researchers is to predict the SWCC with density change from the soil shrinkage curve [35,68]. However, models based on this method are typically empirical, with equations constructed from limited data sets for a certain soil type. Moreover, these models frequently necessitate a large number of fitting parameters.
On the other hand, there are numerous geo-environmental situations where unsaturated soils relate to elevated temperature variation such as climate change [47], energy geo-structures [59,60,72], nuclear waste disposal [31,97], geothermal energy storage system [85], high voltage cables buried in the ground [71].In all such emerging problems, the effect of temperature is required to consider in thermalhydro-mechanical behavior analysis of unsaturated soils.The need for studying the effect of temperature on the SWCC is therefore increased today.There are several efforts devoted to investigating the effect of temperature on the SWCC of unsaturated soils [15,28,43,65,66,69].However, several drawbacks and limitations among existing models were found and thus need to be improved.Firstly, almost all existing models considered the effect of temperature on SWCC by focusing only on the surface tension variation while other important factors (contact angle, void ratio, particle size) were neglected.This limitation might cause inaccurate results when predicting the change of SWCC with temperature.In fact, the variation in air-water surface tension with temperature was found to be small as compared to the fluctuation of air-water contact angle [5,75].Recently, several other authors have also dedicated efforts to investigate the effect of temperature on SWCC through considering the variation in air-water contact angle [28,83].Secondly, not much attention was paid to the effect of the thermal expansion of soil and water on the matric suction when the temperature is increased.However, the thermal expansion of particles and water can alter particle orientation and soil structure, potentially affecting suction dramatically.Another significant limitation is that limited consideration was given to the effect of the thermal volume change on matric suction and SWCC.This shortcoming is probably due to the fact that the relationship between temperature and the void ratio was not established well.Consequently, the temperature change continues to remain a great challenge in the field of geoenvironment engineering when the effect of temperature was not addressed thoroughly. Finally, most published research is devoted separately to the effect of density or temperature on the hydro-mechanical behavior of unsaturated soils.However, there is now broad scope where soils are exposed to both deformations and thermal conditions [24,25,34,58,65,99].It is therefore necessary to propose an SWCC model considering the combined effect of initial density and temperature to be able to quantify the change in the behavior of unsaturated soils under both mechanical and thermal conditions. 
In that interest, this study presents an analytical model to quantify the combined effect of temperature and initial density on the SWCC.In the first step, the effect of initial density on SWCC is studied while the effect of temperature is presented in the second step.Then, two steps are combined to produce a comprehensive model for predicting the coupled effect of initial density and temperature on the SWCC.On the basis of collected twenty-two data sets of thermal volume change, this study also developed further a theoretical correlation between void ratio and temperature that is directly related to soil plasticity.The validity of the proposed model is verified by comparison with several sets of published experimental data.The proposed method is applicable to predict the single effect of temperature and initial density as well as the combined effect of both on the SWCC. 2 Modeling the effect of initial density on the SWCCs Initial density-dependent SWCC model Arya and Paris [3] discovered that the soil-water characteristic curve and the particle-size distribution curve had a similar shape based on test findings on diverse materials.The particlesize distribution curve can then be converted into a soil-water characteristic curve by utilizing an equivalent pore-size distribution curve (PSDC), according to one theory.The cumulative particle-size distribution curve can theoretically be divided into multiple segments, each representing an equivalent pore volume with the same radii (Fig. 1).Furthermore, if the solid volume in each assemblage can be approximated by uniform-size spheres specified by the mean particle radius, then the number of spherical particles corresponding to the solid mass in the ith particle range will be n i .Meanwhile, the volume of the resulting pores can be approximated by uniform-size cylindrical capillary tubes whose radii are related to the mean particle radius.The solid volume and void volume per unit sample mass can be expressed as follows: where V s = solid volume per unit sample mass, V v = void volume per unit sample mass, m s = solid mass, q s = particle density, e = initial void ratio, l i = total pore length, r i = pore radius, R i = soil particle radius, n i = number of soil particles.The total pore length in a natural soil material, however, is determined by the shapes and sizes of the particles.Because the actual soil particles are non-spherical, each soil particle contributes a length greater than the diameter of an equivalent sphere [60].As a result, the number of spherical particles with radius R i needed to trace the whole pore length in natural soil material will be greater than n i .The required particle number therefore will be n d i , in which d ! 1, and the total pore length must be l Replacing l i back into Eq.(1b), the void volume considering the effect of actual particle sizes, shapes, and orientations can be re-written: where d = a calibrated factor that allows minimizing the limitation of spherical particle assumption. By dividing Eqs. 
( 2) and (1a) side by side, the relationship between pore radius (r i ) and particle radius (R i ) can be derived as follows: In another way, the soil suction can be estimated by the Laplace-Young equation based on the thermodynamic equilibrium theory [28,62]: where w = matric suction, r s = air-water surface tension, u a = air pore pressure, u w = water pore pressure, a = airwater contact angle, cosa = wetting coefficient.It should be noted that depending on whether the soil state changes along a path of wetting or drying, the contact angle may change.The proposed model, however, uses the reference SWCC to predict how SWCC would change with temperature and density.This indicates that by using the reference SWCC, the impact of hydraulic hysteresis on the soil's retention behavior was incorporated into the proposed model.Replacing Eqs.(3) in (4), the initial void ratio-dependent function of matric suction is given: It is noted that the effect of pore size and sensitivity of density change on suction was considered in Eq. (5) through the parameter n i, in which soils with larger pores require a smaller number of particles and vice versus. Considering a reference case with the initial void ratio e 0 , the expression for matric suction at reference state is: where e 0 = reference initial void ratio, ðn i Þ e 0 = number of soil particles corresponding to reference initial void ratio, w e 0 = matric suction at reference initial void ratio.Considering an arbitrary case with the initial void ratio e i , the expression of corresponding matric suction is: where e i = arbitrary initial void ratio, ðn i Þ e i = number of soil particles at an arbitrary initial void ratio, w e i = matric suction at arbitrary initial void ratio.Dividing side by side of Eqs.(7) to Eq. ( 6) gives: If the considered overall volume of a soil sample remains constant (V = V s ?V v ), the void volume component (V v ) and solid volume component (V s ) will change as the density of the soil sample changes.Variations in the number of soil particles are required because the pore volume is represented by an assemblage of particles.The physical link between void ratio and solid volume can be used to determine the number of soil particles in a soil sample as follows: where V = considered total volume of a soil sample, V si- = volume of a spherical particle.Dividing Eqs.(9b) to (9a) gives the ratio of soil particle number corresponding to two different initial void ratios: Substituting Eqs.(10) into Eq.( 8) and rearranging give: in which the matric suction, w e 0 , can be obtained from the reference SWCC at reference initial void ratio using the equation of Fredlund and Xing [19]as follows: where S e 0 = the degree of saturation for the reference SWCC; a, m, n = fitting parameters. 
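To make the reference curve concrete, here is a minimal Python sketch of Eq. (12). Since the display itself is not reproduced above, the sketch assumes the common Fredlund and Xing [19] form S = [ln(e + (ψ/a)^n)]^(-m) without the low-suction correction factor; the parameter values are those quoted later for the Salager et al. data set and are purely illustrative.

```python
import numpy as np

def fredlund_xing_saturation(psi, a, n, m):
    """Reference SWCC S_e0(psi) in the Fredlund-Xing form.

    psi : matric suction (same units as the fitting parameter a, e.g. kPa).
    a, n, m : fitting parameters of the reference curve.
    NOTE: this assumes the common variant without the low-suction
    correction factor C(psi); the paper's Eq. (12) may differ in detail.
    """
    return (np.log(np.e + (psi / a) ** n)) ** (-m)

# Illustrative reference curve (parameters quoted below for the Salager et al. data set)
suction = np.logspace(-1, 5, 200)          # 0.1 kPa to 100 MPa
S_ref = fredlund_xing_saturation(suction, a=10.0, n=0.7, m=1.0)
```

In the calibration procedure described below, this function supplies the reference degree of saturation, on which the density and temperature scalings then act.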
It should be emphasized that, depending on the defined problem and available data, the derived equation can be used to calculate with both the initial void ratio (suction is zero) and the current void ratio.This is because the current void ratio can be linked to the initial void ratio through a volume change equation that defines the relationship between void ratio and suction.As can be seen, the suggested equation provides a direct relationship between suction evolution and void ratio, and its applicability is suitable for both states of void ratio while the current void ratio is frequently used in existing models.Additionally, because the suction was connected to the change in initial void ratio, as a result, the proposed model already incorporated the effect of stress condition into account.Consolidation tests can be used to determine how the initial void ratio of soils may change under various stress states.Then, by adjusting the initial void ratio, it can use the proposed model to apply for any stress condition. The SWCCs at an arbitrary initial void ratio can be predicted by combining Eqs.(11) and ( 12) as follows: It is noted that the proposed model (Eq.13) involves only a new calibrated parameter d, which can be treated as a specific constant for a soil type.The parameter d can be obtained by performing calibration from two SWCC tests or using the following equation: where N = number of measured data pairs for the same degree of saturation. Performance of the proposed model It can be observed that the proposed model has four fitting parameters, in which three fitting parameters (a, n, m) are used to control the shape of reference SWCC, and the fitting parameter d is used to control the effect of initial density.Therefore, at least the measured data of two different SWCC tests are required to calibrate the model.One test data are used to obtain the fitting parameters (a, n, m) for a reference SWCC, and the second test data for a different initial void ratio are used to get the calibrated fitting d.When all four fitting parameters are obtained, the calibrated model can then be used to predict the SWCC at an arbitrary initial void ratio.The procedure of the proposed model application in predicting the SWCC variation with soil density is presented by a flow chart in Fig. 2. In order to check the validity of the analytical model, selected test data sets must include the measured data of at least three SWCCs for three different initial void ratios.To illustrate the calibration procedure as well as to verify the performance of the proposed model, two independent sets of laboratory test data are presented in this section.It should be noted that the success of the calibration procedure, as well as prediction performance of the proposed model, is assessed by using the average relative error (ARE), which is expressed as follows: where, S predicted = predicted degree of saturation, S measured = measured degree of saturation.The first selected data set is from Salager et al. [68,70], which was obtained from clayey sand samples with a broad range of initial densities ranging from 1.35 to 1.95 g/cm 3 .The tested soils have a plasticity index of 9.5 and specific gravity (G s ) equals 2.65. Figure 3a shows the test data of SWCCs for five different values of the initial void ratio.It should be noted that the measured data of the loosest sample corresponding to an initial void ratio of 1.01 were considered as the reference state.A reference SWCC is then plotted using Eq. 
( 12), in which the reference fitting parameters were as follows: a = 10 kPa, m = 1.0, n = 0.7.It can be observed that the reference SWCC passes through almost all test points and the calibration is therefore reliable and successful.The calibrated parameter is then calculated using the second SWCC test data, which corresponds to an initial void ratio of 0.86.The calibrated SWCCs for five distinct values ranging from 5 to 65 are shown in Fig. 3b.The calibrated SWCC with a value of 35 generally provides the best match with the measured data, with an average relative error of only 5.2%.Following the selection of d = 35, four fitting parameters of the proposed model are now determined, which allows using Eq. ( 13) to predict the SWCC at an arbitrary initial void ratio.Figure 3c shows a comparison between predicted and measured SWCCs for three different initial void ratios (e i = 0.68, 0.55, 0.44).It can be observed that all three predicted SWCCs are in good agreement with measured data.The average Step 1: Choose a reference data set with reference void ratio (e 0 ) Step 2: Plot reference SWCC based on reference data set Fitting parameters of reference SWCC (a, m, n) are determined Step 3: Use second test data for a different void ratio (e i = e 1 ) to calibrate δ Fix values e 0 and e 1 while increase δ from 0 to find best value of δ Step 4: Check agreement between calibrated SWCC and test data ARE should be used to check ARE = ≤ 10% Step 5: Collect fully four fitting parameters (a, m, n, δ) A complete model is used to predict SWCC at any arbitrary void ratio At least two test sets at different void ratios are required Step 6: Replace e i by an arbitrary void ratio and use Eq. ( 13) to plot SWCC Yes No The reference void ratio (e 0 ) is fixed as a benchmark for all predictions of SWCC Another test data set from Huang et al. [33] is selected to verify further the performance of the proposed model.In this case, the silty sand was tested for six different initial Besides, it is also interesting to note that the effect of initial density on SWCC becomes more significant only for the suction range larger than the air entry value but smaller than 1000 kPa.Soils reaching a critical void ratio at the range of high suction can be considered as a reason behind this trend.The measured against calculated degree of saturation is shown in Fig. 4d, which indicates that the proposed model is successful in predicting the variation in SWCC with initial density. 3 Modeling the effect of temperature on the SWCCs General form for the temperaturedependent suction The effect of initial density on the soil suction was presented in the previous section, which allows considering the changes in the hydro-mechanical state of unsaturated soils.In this section, the effect of temperature on soil suction will be solved so that the changes in the thermohydro-mechanical state of unsaturated soils can be described.According to Eq. ( 5), the change of soil suction depends on five different components: surface tension (r s ), air-water contact angle (cos a), particle size (R i ), water density ðq w Þ, and void ratio (e i ).The total differential of matric suction therefore can be expressed in involving the five independent variables as follows: It is noted that to study the temperature effect on matric suction, many researchers focused mainly on the first term on the right-hand side of Eq. 
( 16) while the following four terms received less attention.In the current study, the temperature-dependent model of matric suction is investigated by incorporating simultaneously all five different terms on the right side of Eq. ( 16). As a starting point, the initial temperature state of soils is defined as the reference temperature (T 0 ).An expression of matric suction at reference temperature can be written as follows: where w T 0 is the matric suction at a reference temperature, r s0 is air-water surface tension at the reference temperature, r 0 is particle radius at the reference temperature, a 0 is air-water contact angle at the reference temperature, e 0 is the initial void ratio of soils at the reference temperature, n i0 = number of soil particles at a reference temperature, q w0 = water density at the reference temperature. And the corresponding matric suction at an arbitrary current temperature (T) can be expressed by: where w T is matric suction at the arbitrary current temperature, r sT is air-water surface tension at the current temperature, r T is particle radius at the current temperature, a T is air-water contact angle at the current temperature, e T is the initial void ratio of soils at the current temperature, n iT is the number of soil particles at the current temperature, q wT = water density at the current temperature. Dividing Eqs.(18) to (17) side by side gives: It is mentioned that Eq. (19a) can be re-written under a simplified form as follows: where where f r is the surface tension factor, f a is the air-water contact angle factor, f R is the particle-size factor, and f e is the void ratio factor, f q is water density factor, b v is the thermal volumetric expansion coefficient of solid particle, DT is the temperature increment. Temperature-dependent function of surface tension Surface tension is defined as the tensile force per unit length of the air-water interface.Several linearly empirical equations were established based on the low range of temperature between 0 °C and 40 °C [29].However, Vargaftik et al. [84] presented a set of data for the surface tension up to 200 C, which presents a nonlinear form.Therefore, by using the regression analysis technique, the following equation is proposed for the temperature-dependent function of surface tension: where T is current temperature in degree Celsius (°C). The surface tension of the air-water interface at the reference temperature is expressed as follows: where e 0 is reference temperature in degree Celsius (°C). For the sake of simplicity in presentation, Eq. ( 25) can be re-written as follows: where Replacing Eqs. ( 25) and ( 26) back into Eq.( 20), the surface tension factor f r is derived: 3.3 Temperature-dependent function of contact angle Grant and Salehzadeh [28] assumed that a change in interfacial energy equals a change in interfacial tension. Based on the interfacial energy approach, the following expression is stated: where Dh T is immersion enthalpy per unit area at current temperature T. Rearranging Eq. ( 29), the expression for the temperature dependence of contact angle can be obtained: At the largest immersion enthalpy per unit area, Dh T ¼ Dh max , the change in wetting coefficient approaches zero: On the other hand, the derivative of surface tension with temperature is obtained as follows: Substituting Eqs. ( 31) and ( 32) back into Eq.( 29) and rearranging give: Replacing Eq. 
( 33) into Eq.( 30) and conducting an integration procedure for temperature range from T 0 to T, a solution can be derived as follows: Replacing Eq. ( 34) into Eq.( 21), the contact angle factor f a is derived as follows: Temperature-dependent function of particle size The volume of soil particles is supposed to expand with increasing temperature, and therefore, the particle radius is also changed [54,58,59].The volumetric thermal expansion of a solid particle is considered by the equation below: where DV is volume increment of the solid particle due to temperature increase, V 0 is the original volume of the solid particle at a reference temperature, DT is the temperature increment, b v is the volumetric thermal expansion coefficient of solid particles.It is noted that the coefficient of volume expansion b v can be predicted approximately three times the coefficient of linear expansion (b v & 3a).Assuming the soil particles to have a spherical shape, the relation between particle radius and temperature is expressed as follows: Replacing Eq. ( 37) into Eq.( 22), the particle-size factor is derived: Temperature-dependent function of void ratio It is noted that temperature increase usually induces a contraction of the soils, particularly for normally consolidated clays [32,49,50].In the modeling attempts, the volumetric strain due to heating is usually determined by the following relationships: where e T v is the volumetric strain by heating, De T is the void ratio change due to temperature increase, k is a materialdependent coefficient to relate the temperature change to the thermal volumetric strain. Balancing two sides of Eqs.(39a) and (39b) gives: The void ratio factor can be obtained by replacing Eq. ( 40) into (23), which gives: According to Eq. ( 40), the thermal volume change depends on the accuracy of coefficient k.However, many experimental results revealed that the coefficient k depends on the property of soils including OCR, initial void ratio, plasticity index [1,6,7,9,10,13,14,16,18,26,30,38,39,44,46,48,63,64,76,79,80,91].Thermal volume change, on the other hand, is almost independent of suction, according to Uchaipichat and Khalili [80].This implies that the thermal volume change function of unsaturated soil is similar to that of saturated soil.Furthermore, Demars and Charles [17] found that the volume change due to temperature is more sensitive to plasticity index (PI) than any other factor, based on laboratory test results.As a result, the plasticity index is used in this study to establish a relationship between void ratio changes and temperature. 
In this study, measured data from twenty-two laboratory test sets, comprising three data sets on unsaturated soil and nineteen on saturated soil, are used to build an empirical relationship between thermal volume change and plasticity index. A wide range of PI, between 5 and 135, was collected for different soil types. Figure 5a shows the relationship between the coefficient k and the plasticity index of the soils. According to the results, the value of the parameter k increases significantly with increasing plasticity index. It should be noted that different test data sets tend to give quite similar values of the coefficient k when the tested soils have the same plasticity index. Furthermore, the three data sets on unsaturated soils show that the coefficient k is nearly independent of matric suction. As a result, the proposed equation is expected to work well for both saturated and unsaturated soils, although it is well accepted that the physical mechanisms in the two kinds of soil are more complicated. Using regression analysis, four different forms of best-fit curve are plotted in Fig. 5b. The polynomial form of the best-fit curve produces the best agreement with the measured data (coefficient of determination R² = 0.906) and is therefore selected to represent the relationship between the coefficient k and the plasticity index in this study. The expression for the polynomial best-fit curve is given by Eq. (42). Substituting Eq. (42) back into Eq. (41), the variation in the void ratio factor f_e with the plasticity index can be determined.

Figure 6 shows the variation of f_e with plasticity index and temperature. The void ratio factor increases nonlinearly with both increasing plasticity index and increasing temperature. The influence of the plasticity index on the void ratio factor becomes more significant in the higher temperature ranges. It should also be noted that the void ratio factor is more strongly influenced by temperature fluctuations for soils of high plasticity than for soils of low plasticity.

Temperature-dependent function of water density

For most geotechnical engineering applications, the density of water under isothermal conditions is commonly assumed to be 1000 kg/m³. However, the experimental data of Lide [42] and Keshky [36] show that the water density decreases with increasing temperature. The relationship between the density of pure liquid water (in kg/m³) and temperature (°C) is:

ρ_w = 999.876 − 0.00588 T − 0.004606 T²   (43)

The water density factor f_ρ can then be expressed as follows:

f_ρ = (999.876 − 0.00588 T − 0.004606 T²) / (999.876 − 0.00588 T_0 − 0.004606 T_0²)   (44)

Discussion of the temperature-dependent suction model

To illustrate the effect of temperature on surface tension, contact angle, particle size, water expansion, and void ratio, Fig. 7 depicts the relationship between matric suction and pore size under temperature fluctuation for four different cases: (1) temperature-dependent particle size and water density only (f_σ = f_α = f_e = 1); (2) temperature-dependent particle size, water density, and surface tension only (f_α = f_e = 1); (3) temperature-dependent particle size, water density, surface tension, and void ratio only (f_α = 1); (4) all five temperature-dependent functions of particle size, water density, surface tension, void ratio, and contact angle. The change in matric suction is least significant for case 1 and most significant for case 4.
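Two of the temperature factors above have fully specified functional forms and can be written down directly; a small sketch follows. The water-density factor implements Eq. (44) verbatim, whereas the particle-size factor uses (1 + β_v ΔT)^(-1/3), which is our reading of Eqs. (36)-(38) and should be treated as an assumption; the surface-tension, contact-angle, and void-ratio factors are not coded here because their regression coefficients are not reproduced in the text.

```python
import numpy as np

def f_rho(T, T0):
    """Water-density factor, Eq. (44): rho_w(T) / rho_w(T0),
    with rho_w in kg/m^3 and T, T0 in degrees Celsius."""
    rho = lambda t: 999.876 - 0.00588 * t - 0.004606 * t ** 2
    return rho(T) / rho(T0)

def f_R(T, T0, beta_v):
    """Particle-size factor from thermal expansion of a spherical grain.
    Assumes R_T = R_0 * (1 + beta_v * dT)**(1/3) (Eq. (37)), so that the
    factor entering the suction ratio is R_0 / R_T; this explicit form is
    an assumed reading of Eq. (38). beta_v is roughly three times the
    linear expansion coefficient (1/degC)."""
    dT = T - T0
    return (1.0 + beta_v * dT) ** (-1.0 / 3.0)

# Example: heating from 15 degC to 100 degC with beta_v = 3.3e-5 1/degC
print(f_rho(100.0, 15.0), f_R(100.0, 15.0, beta_v=3.3e-5))
```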
The results reveal that the temperature-dependent function of contact angle has the greatest influence on matric suction as compared to other remaining functions. For example, at the pore size radius of 1 mm, the reduction in matric suction due to fluctuation in particle size, void ratio, surface tension, and contact angle is 4.1%, 8.31%, 12.9%, and 44.2%, respectively, when the temperature increases from 15 to 100 C. The results indicate that the suction change due to temperature is influenced through the following factors in descending order: air-water contact angle, surface tension, void ratio, particle-size expansion, and water density.It is therefore found that accounting only for the effects of temperature on surface tension is not sufficient to evaluate the suction change with temperature and can cause inaccurate results.It is obvious that the contribution of the contact angle function to the matric suction reduction due to temperature fluctuation is larger than the total contribution of the four remaining functions (surface tension, particle size, water density, and void ratio).Figure 8 demonstrates the changes in matric suction with pore size for two cases: the temperature-dependent contact angle and the temperature-independent contact angle.The comparison results between the two examined cases indicate that changes in matric suction for the temperature-independent contact angle case are relatively small as compared to the temperature-dependent contact angle case.The results, therefore, highlight the importance of considering a temperature-dependent contact angle function for multi-physics numerical simulations involving temperature effects in unsaturated soils.Unfortunately, due to the complex problem of the relationship between contact angle and temperature, not enough attention was paid to this aspect, or it was usually neglected in the existing models.Fig. 8 Relationship between pore size and matric suction for temperature-dependent and temperature-independent contact angles at various temperatures Temperature-dependent soil-water characteristic curve equation It should be noted that the variation in soil suction with temperature can be predicted by using Eq.(19b) when all five temperature-dependent functions were addressed.The isothermal SWCC equation can then be extended to nonisothermal conditions by combining Eqs. ( 12) and (19b), in which Eq. ( 12) is used to plot the reference SWCC.Predicted SWCC at arbitrary temperature can be obtained by using Eq.(19b) to calculate matric suction change with temperature while the degree of saturation is kept constant as the one of reference SWCC.The expression for the nonisothermal SWCC equation is as follows: For sake of verification, the results obtained from the proposed model are compared with experimental data and three existing non-isothermal models [28,66,69].Two sets of laboratory test data are then selected for studying the validity of the non-isothermal SWCC model in this section.The first test data set is obtained from Constantz [15], where Oakley sand was tested at 20 C and 80 C. The sandy soil has fundamental physical properties as follows: specific gravity G s = 2.72, dry density c d = 1.77 g/cm 3 , initial void ratio e 0 = 0.52, and plasticity index PI = 1.The second data set is obtained from Wan et al. [99], which tested on the compacted bentonite at 20 C and 80 C. 
The fundamental properties of the bentonite are as follows: specific gravity G s = 2.66, dry density c d = 1.70 g/cm 3 , saturated water content w = 10.6%, initial void ratio e 0 = 0.56, and plasticity index PI = 239.It is interesting to note that the case of Oakley sand has a low plasticity index while the case of compacted bentonite has a high plasticity index that may thus be sufficient to assess the performance of the proposed model at a wide range. For the data set of Constantz [15], the SWCC test data at 20 C are used to plot the reference SWCC, in which the fitting parameters are determined as follows: a = 7 kPa, m = 1.8, n = 1.25, d = 10 (sand).It should be noted that the tested soil type was sand with a very low plasticity index, and the thermal volume change is therefore quite minimal.The temperature-dependent function of the void ratio is approximately close to 1, and thus has less effect on non-isothermal SWCC. Figure 9a shows the change of SWCC with temperature for four different cases.According to the results, a temperature increase generally produces lower SWCC.However, it should be emphasized that incorporating all five temperature-dependent functions of particle size, water density, surface tension, void ratio, and contact angle gives a more accurate prediction than only considering a single function.Moreover, it is also observed that the temperature-dependent function of the contact angle has a significant influence on SWCC as compared to other functions.Figure 9b demonstrates the comparison between predicted and measured SWCCs at 80 C. It is worthy to note that the predicted SWCC by the proposed model is in good agreement with measured SWCC, and generally in better accordance compared to the three existing models.The key to this difference comes from the fact that almost all existing models focused only on considering the change of surface tension with temperature while other factors were omitted.It is also found that the model of Grant and Salehzadeh [28] has a better prediction compared to the models of [69], and one of Roshani and Sedano [66].This is because, besides surface tension function, the model of Grant and Salehzadeh (1996) considers further the effect of temperature-dependent contact angle on SWCC while the two remaining models focus only on the variation in surface tension with temperature.On the other hand, the model of Salager et al. [69] has an advance over the other models by considering the volumetric expansion of water which was ignored in the models of Grant and Salehzadeh [28], and Roshani and Sedano [66].This explains why the model of Salager et al. [69] gives an SWCC lower than the model of Roshani and Sedano [66] for the same temperature.Figure 9c presents predicted against the measured degree of saturation.It is noted that the proposed model has a good performance in predicting the variation in SWCC with temperature.All three models underpredict the effect of temperature on the SWCC.The average relative errors of four models are 7.3%, 24.4%, 31.9%, and 43.6% for the proposed model, Grant and Salehzadeh [28], Salager et al. [69], and Roshani and Sedano [66], respectively. The comparison results for the second data set of Wan et al. [86] are shown in Fig. 10, in which the reference SWCC is also plotted by using test data at 20 C. 
The fitting parameters of this reference SWCC are as follows: a = 1300 kPa, m = 1.34, n = 0.65.It should be noted that the plasticity index of the tested bentonite is very high (PI = 239), and the thermal volume change is therefore much larger than the sandy soil case of Constantz [15].The variation in SWCC with temperature corresponding to four different cases is presented in Fig. 10a.A similar trend is also observed for bentonite when the effect of the contact angle function is more prominent than the three remaining functions.It is therefore concluded that the predictions of temperature-dependent SWCC are more effective and precise with considering the temperature-dependent function of the contact angle.Figure 10b Meanwhile, the effect of temperature on SWCC is less important according to the estimation from the three remaining models.It should be noted that test temperature in the case of Wan et al. [86] was lower than the case of Constantz [15], and the suction range of bentonite is also much higher than that of sand (about 1000 times).Finally, the limited change of surface tension with a low-temperature range is the main reason why the predicted SWCCs obtained from three existing models look very close to the reference SWCC.The results obtained from the proposed model are then extended to compare with measured data for different temperatures, as shown in Fig. 10c.A satisfactory agreement between the proposed model and test data is obtained for different temperatures ranging between 20 C and 80 C. Figure 10d shows the predicted against the measured degree of saturation, which indicates that the proposed model is applicable to predict effectively the variation in the SWCC with temperature.The average relative error of the proposed model, Grant and Salehzadeh [28], Salager et al. [69], and Roshani and Sedano [66] is 2.2%, 15.9%, 23.7%, and 30.2%, respectively.The predictive performance of the proposed model is, therefore, higher than the other remaining models. 4 Modeling the combined effect of initial density and temperature on SWCC Coupled mechanical-thermal SWCC model In many practical situations, unsaturated soils may experience both deformations and thermal variations.To the best knowledge of authors, unfortunately, none of the existing models are found to be able to predict the combined effect of temperature and density variations on the SWCC.With success in proposing models to predict the individual influence of initial density and temperature on the SWCCs in previous sections, this section presents a comprehensive model by combining the two models into a coupled one.By combining Eq. ( 13), and (45), the expression for the coupled model of SWCC can be written as follows: where ðS e 0 Þ T 0 is saturation degree corresponding to reference initial void ratio at the reference temperature, ðw e 0 Þ T 0 c) reference SWCC and replacing the reference matric suction ðw e 0 Þ T 0 by ðw e i Þ T , Eq. ( 46) can be used to predict the SWCC at arbitrary temperature with an arbitrary initial void ratio.Another version of the proposed model for the SWCC using the relationship between volumetric water content and matric suction is shown as follows: where ðh e 0 Þ T 0 is volumetric water content corresponding to reference initial void ratio at the reference temperature, ðh e i Þ T is volumetric water content corresponding to an arbitrary initial void ratio at the current temperature. 
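To show how the coupled expression (46) is meant to be used, the sketch below keeps the reference Fredlund-Xing curve fixed and rescales only the suction axis: once by a density ratio and once by the product of the five temperature factors. Both scalings are reconstructions rather than quotations: the density ratio sqrt((e_0/e_i) ((1+e_i)/(1+e_0))^(1-δ)) is our reading of Eqs. (8)-(11), and the temperature scaling assumes the simple multiplicative form of Eq. (19b). The parameter values in the example call are the Romero et al. values quoted in the next subsection; the temperature factors passed in are placeholders.

```python
import numpy as np

def fredlund_xing_saturation(psi, a, n, m):
    # Reference SWCC at (e_0, T_0); see the earlier sketch after Eq. (12).
    return (np.log(np.e + (psi / a) ** n)) ** (-m)

def density_scaling(e_i, e_0, delta):
    """Suction ratio psi_{e_i}/psi_{e_0}, reconstructed from Eqs. (8)-(11):
    sqrt((e_0/e_i) * ((1 + e_i)/(1 + e_0))**(1 - delta)).
    This explicit form is an assumption, not a quotation of Eq. (11)."""
    return np.sqrt((e_0 / e_i) * ((1.0 + e_i) / (1.0 + e_0)) ** (1.0 - delta))

def temperature_scaling(f_sigma, f_alpha, f_R, f_e, f_rho):
    """Suction ratio psi_T/psi_{T0}; assumes the multiplicative form of
    Eq. (19b), each factor being the ratio of current to reference value."""
    return f_sigma * f_alpha * f_R * f_e * f_rho

def coupled_swcc(psi, a, n, m, e_i, e_0, delta, temp_factors):
    """Predicted degree of saturation at void ratio e_i and temperature T:
    the reference curve evaluated at the back-scaled suction, which is one
    way of realising the structure of Eq. (46)."""
    scale = density_scaling(e_i, e_0, delta) * temperature_scaling(*temp_factors)
    return fredlund_xing_saturation(psi / scale, a, n, m)

# Illustrative call with the Romero et al. fitting parameters quoted in Sect. 4.2;
# the five temperature factors here are placeholders, not computed values.
S = coupled_swcc(psi=np.logspace(0, 5, 100), a=10.0, n=0.8, m=0.88,
                 e_i=0.62, e_0=0.97, delta=45,
                 temp_factors=(0.86, 0.70, 1.0, 1.0, 0.99))
```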
Validity of coupled mechanical-thermal SWCC model The validation and performance of the proposed model in modeling the combined effect of initial density and temperature on the SWCC of unsaturated soils are verified by comparison with experimental results for three test data sets.The first test data sets are obtained from Romero et al. [65], where the swelling clay (Boom clay) with a plasticity index of 27 was tested under two initial void ratios (0.97 and 0.62) at two different temperatures (22 C and 80 C). The second one is referenced from Imbert et al. [34], in which the Foca clay with a plasticity index of 62 was tested under four initial void ratios (0.7, 0.5, 0.47, 0.42) at three different temperatures (20 C, 50 C, and 80 C).The last one is obtained from Gens [24], where Febex bentonite with a plasticity index of 87 was tested under three initial void ratios (0.68, 0.64, 0.59) at three different temperatures (20 C, 40 C, and 60 C). Figure 11a shows the comparison between predicted and measured SWCCs for the test data sets of Romero et al. [65].It should be noted that the test data of SWCC at an initial void ratio of 0.97 and temperature of 22 C are selected to develop the reference SWCC.The fitting parameters of the reference SWCC is as follows: a = 10 kPa, m = 0.88, n = 0.8, and d = 45.According to the results, the predicted SWCCs are in good agreement with measured data for different cases with variations in both initial void ratio and temperature.Furthermore, it is noted that both experimental results and analytical outcomes agree that the effect of temperature on SWCC can start even from a low suction range (1 kPa in this case) while the effect of initial void ratio becomes more significant only when matric suction reaches a sufficiently large value (100 kPa in this case).Figure 11b shows the measured against calculated degree of saturation for all test points.The average relative error (ARE) for the proposed model, in this case, was only 3.40%, which indicates that the proposed model is successful in predicting the combined effect of density and temperature on the variation in SWCC. Figure 12a demonstrates the performance of the proposed model with comparison to measured data for the test data sets of Imbert et al. [34].Concerning this case, the test data of SWCC at an initial void ratio of 0.42 and temperature of 20 C are used to obtain the reference SWCC.The fitting parameters of the reference SWCC are as follows: a = 114 kPa, m = 1.45, n = 1.1, and d = 25.It can be observed that the proposed model shows an excellent match to the measured data for different temperature and initial void ratios.It is found that both density and temperature have a significant influence on the variation in matric suction, and the combined effect of these two factors leads to a strong change in the SWCC.Furthermore, it is also more interesting to note that the effect of density on the SWCC is more dominant than temperature.Figure 12b shows the measured against calculated degree of saturation for 42 test points.The results indicate that the proposed model has a high performance for predicting the variation in SWCC with temperature and density.The effectiveness of the proposed model is specifically proved by the value ARE, which is about 2.95%. 
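All of the comparisons in this section are summarised through the average relative error of Eq. (15). Since the display itself is not reproduced above, the helper below assumes the usual definition: the mean absolute relative deviation between predicted and measured degrees of saturation, expressed as a percentage.

```python
import numpy as np

def average_relative_error(S_predicted, S_measured):
    """Average relative error (ARE) in percent. Assumed form:
    mean of |S_pred - S_meas| / S_meas over all test points, times 100."""
    S_predicted = np.asarray(S_predicted, dtype=float)
    S_measured = np.asarray(S_measured, dtype=float)
    return 100.0 * np.mean(np.abs(S_predicted - S_measured) / S_measured)
```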
Gens [24] have conducted SWCC tests for three cases, 1) different initial void ratios at the same temperature, 2) different temperatures at the same initial void ratio, and 3) different temperatures at different initial void ratios.With the variety of scenarios, the test data are more useful to evaluate the influence degree between temperature and initial density on the SWCC.The test data of SWCC at an initial void ratio of 0.68 and temperature of 20 C are used to obtain the reference SWCC in this case study.The fitting parameters of the reference SWCC are as follows: a = 50 kPa, m = 1.1, n = 1.5, and d = 25.The comparison between analytical and experimental models for the data sets of Gens [24] is presented in Fig. 13a.It should be noted that the proposed model shows a good match to measured data for different scenarios.It is also more interesting to note that the SWCC is influenced by the density more significantly than the temperature.Figure 13b describes the measured against calculated degree of saturation for 41 test points.The value ARE in this case was only 1.2%, which indicates that the proposed model has a good prediction performance and is proper to be used in predicting the combined effect of temperature and density on the SWCC.This paper presents a study for modeling the combined effect of temperature and initial density on the soil-water characteristic curve of unsaturated soils.Some key points can be summarized as follows: A simple model was proposed to predict the effect of initial density on the soil-water characteristic curve of unsaturated soils.The initial density-dependent model of SWCC was established by translating from the particle-size distribution curve into the soil-water characteristic curve through a pore-size distribution function.The proposed model is simple and effective to be used as only one new parameter is introduced to describe the effect of initial soil density.The comparison with two test data sets showed that the analytical model has a good performance for predicting the initial density effect on the SWCC. A non-isothermal model is also presented to estimate the effect of temperature on the soil suction as well as SWCC. The key to the proposed model is considering five different temperature-dependent functions for surface tension, contact angle, particle-size expansion, water density, and void ratio which leads to a complete method compared to existing models focusing only on surface tension.The results showed that the suction change due to temperature fluctuations is influenced through the following factors in descending order: contact angle, surface tension, void ratio, particle-size expansion, and water density.The validity of the proposed model was verified against some experimental data available in the literature.It has been shown that the non-isothermal model can capture well the temperature effect on the SWCC. On the basis of 22 data sets of thermal volume change, this study also developed further a theoretical correlation between void ratio and temperature that is directly related to soil plasticity.It was observed that the value of the thermal void ratio increases as soil plasticity increases, and there is a nonlinear relationship between the plasticity index and the void ratio.Because of this, soils with high plasticity are more susceptible to volume changes caused by temperature fluctuations than soils with low plasticity. 
A coupled mechanical-thermal SWCC model is then proposed by combining the initial density-dependent and temperature-dependent models, which allows the SWCC to be predicted at any arbitrary initial density and temperature. The coupled mechanical-thermal SWCC model is presented in a simple form, which makes it convenient to apply in practice. The comparison results for three independent test data sets show that the proposed model performs well in predicting the variation in SWCC with both temperature and initial density.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Figure captions (figures not reproduced here):
Fig. 1 Conceptual model for translation between SWCC and PSDC
Fig. 2 Flow chart for the procedure of proposed model application
Fig. 4 Performance of proposed model with clayey sand (measured data source from Huang et al., 1998): a measured data and reference SWCC; b calibration curves with different values of δ; c comparison between predicted and measured SWCCs with δ = 55; d calculated against measured degree of saturation
Fig. 5 Thermal volume change coefficient k as a function of plasticity index: a source of collected test data from the literature; b different forms of best-fit curves
Fig. 7 Contribution of different temperature-dependent functions to variation in matric suction (cases: temperature-dependent particle size and water density only; with surface tension added; with void ratio added; with contact angle added)
Fig. 9 Comparison outcomes for Oakley sand (measured data source from Constantz, 1991): a variation in SWCC considering various temperature-dependent functions; b comparison between predicted and measured SWCCs; c predicted against measured degree of saturation
Fig. 11 Performance of coupled mechanical-thermal SWCC model (measured data source from Romero et al. 2001): a comparison between predicted and measured SWCCs; b calculated versus measured degree of saturation
Fig. 12 Comparison between predicted and measured SWCCs (source data from Imbert et al. 2005): a comparison between predicted and measured SWCCs; b calculated versus measured degree of saturation

Returning to the Huang et al. [33] data set of Sect. 2.2, the calibrated SWCC with a value of δ = 55 is found to give an excellent match to the measured data, with an average relative error of 4.1%. It should be emphasized that the reference and calibrated curves show an excellent match with the test data, and the calibration procedure was therefore reliable and effective. For this data set, the following four fitting parameters were found to be the best: a = 65 kPa, m = 1.2, n = 2.1, δ = 55. The calibrated model can then be used to find the SWCC at an arbitrary initial void ratio. Note that only the reference void ratio is fixed, because the proposed model predicts the SWCC with density on the basis of the reference SWCC. When the void ratio decreases from 0.525 to 0.426, the corresponding SWCCs are obtained in Fig. 4c. The SWCC results predicted by the proposed model are in good agreement with the measured data for all four different void ratios (e_i = 0.49, 0.474, 0.454, and 0.426). The average relative error of the proposed model over the 68 test points is 4.2%.
11,993.8
2023-06-21T00:00:00.000
[ "Environmental Science", "Engineering" ]
A trace inequality for Euclidean gravitational path integrals (and a new positive action conjecture) The AdS/CFT correspondence states that certain conformal field theories are equivalent to string theories in a higher-dimensional anti-de Sitter space. One aspect of the correspondence is an equivalence of density matrices or, if one ignores normalizations, of positive operators. On the CFT side of the correspondence, any two positive operators $A,B$ will satisfy the trace inequality $\operatorname{Tr}(AB) \leq \operatorname{Tr}(A) \operatorname{Tr}(B)$. This relation holds on any Hilbert space ${\cal H}$ and is deeply associated with the fact that the algebra $B({\cal H})$ of bounded operators on ${\cal H}$ is a type I von Neumann factor. Holographic bulk theories must thus satisfy a corresponding condition, which we investigate below. In particular, we argue that the Euclidean gravitational path integral respects this inequality at all orders in the semi-classical expansion and with arbitrary higher-derivative corrections. The argument relies on a conjectured property of the classical gravitational action, which in particular implies a positive action conjecture for quantum gravity wavefunctions. We prove this conjecture for Jackiw-Teitelboim gravity and we also motivate it for more general theories. Introduction The Anti-de Sitter/Conformal Field theory correspondence (AdS/CFT) [1] predicts exact equivalence between appropriate conformal field theories and their dual bulk string theories. Using the bulk to reproduce detailed properties of specific CFTs typically requires using intricate properties of the stringy description. However, it is often the case that fundamental properties of CFTs can already be seen in the approximation where the bulk theory is described by semiclassical gravity, perhaps coupled to appropriate matter fields. Important examples of such properties include CFT microcausality, strong subadditivity of entropy, and the fact that larger regions of the CFT define larger algebras of observables. In particular, these features are associated with results for asymptotically locally anti-de Sitter (AlAdS) bulk spacetimes satisfying the null energy condition. The corresponding bulk results are, first, that any causal bulk curve between boundary points is deformable to a causal curve lying entirely within the boundary [2], second that strong subadditivity holds for HRT surfaces [3], and third that entanglement wedges nest appropriately [3][4][5]. Quantum effects in the bulk typically preserve such properties so long as they satisfy the quantum focussing conjecture [6]. The goal of the present work is to study the dual bulk implementation of the CFT inequality Tr so that the argument of the trace on the left-hand-side is also a positive operator. Recall that positive operators are self-adjoint by definition [7], and that 'positivity' requires the eigenvalues to be non-negative. In (1.1) and (1.2), we use the subscript D to denote the non-gravitational CFT dual of a bulk theory, and we write Tr D to emphasize that the trace is the standard trace on the D side of the duality. In particular, Tr D denotes the familiar operation computed by introducing any orthonormal basis |i⟩ on the Hilbert space for D and performing the sum For simplicity of presentation we confine ourselves to the AdS/CFT context below, but similar discussions clearly apply to other gauge/gravity dualities as well, such as those described in e.g. [8,9]. 
To avoid infrared divergences, we assume D to be defined on a spatially compact spacetime. Since we consider path integrals dual to some Tr D B, we may then take all of our Euclidean boundaries to be compact. The inequality (1.1) is easily proven using standard Hilbert space operations in D. One first notes that the inequality is trivial when Tr D (B) = +∞, so this leaves only the case of finite Tr D (B). One then observes that, when Tr D (B) is finite, the positivity of B requires the operator B to have a largest eigenvalue B max . We then simply choose the {|i⟩} in (1.3) to be eigenstates of C with eigenvalues C i ≥ 0 and write ( 1.4) Indeed, this argument also shows that the bound (1.1) is quite weak, and that it is saturated only when B, C are both proportional to a common projection of rank one. For B = C, this latter observation is equivalent to the familiar statement that the purity of a density matrix is 1 only when the density matrix is pure, and thus when it is proportional to a projection of rank one. While the bound (1.1) may be weak, stronger bounds typically involve further details of the spectrum of B, C and are thus more difficult to study. One example is the bound B max Tr(C) also derived in (1.4). Another is the even stronger von Neumann trace inequality Tr(BC) ≤ i B i C i , where we have now introduced the full set of eigenvalues B i of B, and both C i and B i have been ordered so that C i ≥ C j and B i ≥ B j when i ≥ j. These more intricate bounds on the CFT trace are correspondingly more awkward to study on the gravitational side of the AdS/CFT duality. However, despite its weakness, the bound (1.1) can be used to derive fundamental consequences. One example is the fact that the algebra B(H) of bounded operators on any Hilbert space is a type I von Neumann factor. This can be shown by first noting that the commutant of B(H) is trivial, so that B(H) must be a factor of some type. One then considers any projection P and sets C = B = P . Since P 2 = P , the bound (1.1) requires Tr(P ) ≥ 1 for any P . In contrast, when factors of some type other than I are present in a von Neumann algebra, any faithful normal semi-finite trace on the algebra will always assign arbitrarily small traces to some family of projections having arbitrarily small trace [10]. This result is a key motivation for our study. Our goal here is to show how (1.1) arises from the bulk point of view. In doing so we will work at the level of the semiclassical approximation to the Euclidean path integral for a low-energy bulk effective theory. The semiclassical bulk description will necessarily involve gravity, but our analysis will not depend on the details of any UV completion. Now, in fact, gravitational path integrals that include sums over topology are generally not dual to single CFTs as they fail to factorize over disconnected boundaries (see e.g. the classic discussion of [11]). However, if a non-factorizing bulk path integral makes sense, we expect it to behave like those discussed in [12,13] where the path integral decomposes into a sum over so-called baby universe α-sectors in which factorization holds; see also [14,15] for earlier discussions of this idea. We then expect (1.1) to be satisfied separately in each α-sector. Furthermore, if an inequality of the form (1.2) holds in each member of an ensemble then, so long as the ensemble has non-negative probabilities to realize each of its members, a similar inequality will hold for ensemble averages. 
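Because the chain of inequalities in (1.4) is purely linear-algebraic, it can be sanity-checked in a finite-dimensional toy model. The snippet below builds two random positive semi-definite matrices and verifies Tr(BC) ≤ B_max Tr(C) ≤ Tr(B) Tr(C); it illustrates only the Hilbert-space statement on the D side of the duality, not anything about the bulk path integral.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_positive(dim):
    # b^dagger b is automatically positive semi-definite
    b = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return b.conj().T @ b

B, C = random_positive(8), random_positive(8)

tr_BC = np.trace(B @ C).real
B_max = np.linalg.eigvalsh(B).max()
tr_B, tr_C = np.trace(B).real, np.trace(C).real

assert tr_BC <= B_max * tr_C + 1e-9       # Tr(BC) <= B_max Tr(C)
assert B_max * tr_C <= tr_B * tr_C + 1e-9 # B_max <= Tr(B) for positive B
print(tr_BC, B_max * tr_C, tr_B * tr_C)
```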
We might write this averaged inequality in the form (1.5) Here it will of course be important that the right-hand side is a single ensemble correlation function and not a product of ensemble averages. It is therefore interesting to understand if a given bulk theory satisfies a corresponding inequality, which we might write in the form (1. 6) to sum only over Euclidean spacetimes in which every bulk point is connected by some path to a point on the (asymptotically locally AdS) boundary at which boundary conditions are specified. This coincides with the traditional treatment of the gravitational path integral in AdS/CFT [17]. We adopt this normalization in all discussions below. Our discussion begins with an analysis of simple cases and simple bulk theories in section 2. We first show that, in the context of black hole thermodynamics, standard results for either Jackiw-Teitelboim (JT) [18,19] or Einstein-Hilbert gravity imply the bulk version of the inequality (1.1) to hold at all orders in the semiclassical expansion and at all orders in any perturbative higher-derivative corrections. By referring to the black hole thermodynamics context above, we mean that the operators B, C in (1.6) are both functions of the Hamiltonian H and that relevant path integrals are dominated by Euclidean black hole saddles. We focus on the simple case of pure JT gravity, where there are no other operators to consider and where all bulk saddles contain black holes. However, the arguments in 2.1 also apply to black hole thermodynamics more generally. We then also show that, due to the simplicity of JT gravity, for any UV completion where the path integral can be studied in the manner described by Saad, Shenker, and Stanford [20] and for interesting semiclassical limits, (1.6) holds even when the theory is coupled to matter (so long as the matter coupling is dilaton-free and the matter satisfies a positive action condition). In Section 3, we then proceed to discuss (1.6) for operators B, C in more general theories and more general phases (perhaps not dominated by black holes). Since the inequality (1.1) holds for any quantum theory, it will be enlightening to look again at the D side of the duality to see how the standard non-gravitational Euclidean path integral for D can be used to provide an alternate derivation of (1.6) at leading order in the semiclassical approximation (without yet invoking any possible gravitating bulk dual). This is done in section 3.1, where we assume only that each member of the relevant class of Hamiltonians for the theory is bounded below and that the theory is 2nd order in derivatives. Higher derivative corrections can then be incorporated perturbatively. We do not study quantum corrections in this context since we will treat such corrections by a different argument in our discussion of gravitating bulk duals. The above discussion sets the stage for us to address the general derivation of (1.6) from the gravitational side of the duality. We open this discussion in section 3.2 by showing that the basic outline of the non-gravitational argument of section 3.1 can be easily adapted to the gravitational context. However, a crucial ingredient in the non-gravitational argument turns out to be the fact that the non-gravitational Euclidean action is bounded below. This property is of course well-known to fail off-shell in gravitational theories; see e.g. [21]. 
We deal with this issue in stages by phrasing the argument of section 3.2 in terms of a series of assumptions about the gravitational path integral which will turn out to be plausible (and, in some cases, provably true) despite the fact that the gravitational action is unbounded below. The main discussion focuses on two-derivative theories of gravity (like Einstein-Hilbert or JT), though arbitrary higher derivative corrections are allowed so long as they are treated perturbatively. When our assumptions are satisfied, the argument establishes (1.6) at all orders in the semiclassical expansion. We then separate out discussion of the status of those assumptions (and the associated issues surrounding the conformal factor problem of Euclidean gravity), placing this material in section 4. These assumptions imply a new positive action conjecture that generalizes the original conjecture of Hawking [21] in several ways. We prove this conjecture to hold in JT gravity minimally-coupled to positive-action matter, and we also motivate the conjecture more generally in section 5. Finally, we close in section 6 with a summary and brief discussion of future directions. Simple cases and simple theories Asymptotically Anti-de Sitter Jackiw-Teitelboim gravity is a simple 2d toy model of gravitational systems in which many explicit computations are possible. Section 2.1 considers the theory of "pure" JT gravity which contains only a metric g and a dilaton ϕ, with no additional matter fields. The addition of matter fields will be discussed in section 2.2 using ideas from [20]. We use conventions in which the pure JT action on a disk takes the form Here ϕ 0 is a constant, h is the induced metric on a boundary, and K is the extrinsic curvature (a scalar, since the boundary is one-dimensional) defined by the outward-pointing normal. The detailed boundary conditions to be used will be described in appendix A.1. The Trace Inequality in gravitational thermodynamics: Jackiw-Teitelboim gravity and beyond Pure JT gravity has no local degrees of freedom, and in fact there is very little to compute. In particular, our 2-dimensional bulk must have a 1-dimensional boundary, so the only compact connected boundary is a circle. The JT path integral is then specified by the constant ϕ 0 in (2.1), a function ϕ b on this circle having dimensions of length and prescribing boundary conditions for the dilaton, and the length β of the circle (as defined using a rescaled unphysical metric). However, one may change the conformal frame at infinity without changing the path integral and, by doing so, one can reduce the general computation to the case where ϕ b is any given positive constantφ b [22]. This result is reviewed in appendix A.4. As a result, in the rest of this section we simply choose some fixed value of this constantφ b and consider all circles to be labelled only by their length β in the corresponding conformal frame. If one were to treat JT gravity non-perturbatively at a level where it is equivalent to a theory of a single matrix (see e.g. [13]), then (1.6) would follow by using this equivalence to transcribe into bulk language the quantum mechanical derivation of (1.1) given in section 1. Here we instead wish to focus on semiclassical treatments of JT gravity. The idea is to gain insight into calculations we can also hope to control in higher dimensional gravitational theories. In higher dimensions, the semiclassical limit can be characterized by taking G → 0. 
However, in JT gravity two of the above-mentioned parameters, ϕ_0 and φ̄_b, each take on aspects of the role played by G in higher dimensions. As a result, JT gravity admits various notions of semiclassical limit. One of these is given by taking ϕ_0 large with φ̄_b fixed, while another is the limit of large φ̄_b with fixed ϕ_0. Establishing (1.6) in both cases then clearly also establishes the desired result in any limit where both ϕ_0 and φ̄_b become large. As one can see from (2.1), the entire effect of ϕ_0 is to weight spacetimes in the path integral by e^{4πϕ_0 χ}, where χ is the Euler character of the spacetime. As a result, since we use the normalization described in the introduction in which disconnected compact universes do not contribute, the limit ϕ_0 → ∞ with all other parameters held fixed is dominated by disk contributions. Furthermore, there is a factor of e^{4πϕ_0} for each disk. The number of disks is determined by the number of circular boundaries for the path integral, which is necessarily larger by one on the right-hand side of (1.6) (where M̃_{bc†cb†} has been split into M̃_{b†b} and M̃_{c†c}) than on the left (where M̃_{bc†cb†} remains intact). The right-hand side is thus clearly larger than the left-hand side in the limit where ϕ_0 is taken large with all else fixed. This establishes the desired inequality (1.6) in this context. However, as mentioned above, we can instead choose to keep ϕ_0 finite and to study the limit φ̄_b → ∞ with all else fixed (including the inverse temperature β, which we henceforth require to be finite). Let us use Z(β) to denote the path integral defined by a circular boundary of length β. In the dual quantum mechanical system one would write Z(β) = Tr_D e^{−βH}. (2.2) Since the only objects we can compute are linear combinations of (2.2) with different values of β, the only operators in D that we can study are functions of H, where H is the Hamiltonian of D. The change of conformal frame mentioned above that removes the dependence on general functions ϕ_b is sufficiently local that no further operators would have been found for more general (position-dependent) choices of ϕ_b. For later use, we note that at leading semiclassical order (with the above normalization of the action) one finds the result (2.3) of [22]. We will first discuss the trace inequality (1.6) for the simple case where B = e^{−β_1 H} and C = e^{−β_2 H}. In doing so, it will be useful to recall that a partition function Z(β) allows one to compute an associated entropy S(β) using S(β) = (1 − β ∂_β) ln Z(β). (2.4) It turns out that the condition S ≥ 0 is sufficient to derive the trace inequality (1.6) in the current context. To see this, note that, for B, C as above, our (1.6) is equivalent to ln Z(β_1 + β_2) ≤ ln Z(β_1) + ln Z(β_2). (2.5) In other words, (1.6) is equivalent to the requirement that ln Z(β) be a subadditive function of β. However, since β > 0, non-negativity of (2.4) is equivalent to stating that β^{−1} ln Z(β) decreases monotonically for β ∈ (0, ∞). We may thus derive (2.5) from such non-negativity as follows. Furthermore, for S > 0 we see that (1.6) becomes a strict inequality. Since (2.4) is in fact positive, it follows that (1.6) is satisfied at this order for B = e^{−β_1 H}, C = e^{−β_2 H}. Indeed, we see that the inequality cannot be saturated for any β_1, β_2. As a result, when treated perturbatively, higher-order corrections cannot lead to violations of (1.1). Now, as described in [23], negative entropies do arise in non-perturbative regimes if one takes the path integral for the no-boundary baby universe state to compute the entropy (2.4).
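To spell out the derivation of (2.5) from S ≥ 0 invoked above, here is the short computation (a sketch, using only the relation (2.4) quoted above and assuming Z(β) is smooth in β):

S(β) = (1 − β ∂_β) ln Z(β) = −β² ∂_β [ β^{−1} ln Z(β) ] ≥ 0,

so β^{−1} ln Z(β) is non-increasing on (0, ∞). Hence, for any β_1, β_2 > 0,

ln Z(β_1 + β_2) = β_1 · (β_1 + β_2)^{−1} ln Z(β_1 + β_2) + β_2 · (β_1 + β_2)^{−1} ln Z(β_1 + β_2) ≤ β_1 · β_1^{−1} ln Z(β_1) + β_2 · β_2^{−1} ln Z(β_2) = ln Z(β_1) + ln Z(β_2),

which is (2.5). When S > 0 the monotonicity is strict and the inequality cannot be saturated. The non-perturbative caveat just noted, that (2.4) can become negative when computed in the no-boundary baby universe state, is taken up next.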
But the entropies in individual superselection sectors (which are dual to entropies of individual CFTs) should be positive even at the non-perturbative level; see again the discussion of superselection sectors, ensembles, and factorization in section 1. Furthermore, as described there, we would still expect the trace inequality to hold in the form (1.6), which requires us to include contributions from spacetime wormholes on the right-hand side. Including simple such wormholes did indeed ameliorate the negative-entropy issues discussed in [23]. Consistency with the dual matrix ensemble of [20] then requires that the remaining issues be resolved by the inclusion of higher topologies and the appropriate non-perturbative completion, though this remains to be explicitly analyzed. The simple argument given above for the case B = e^{−β_1 H}, C = e^{−β_2 H} can be extended to general functions of H constructed as linear combinations of the e^{−βH}. A straightforward way to do so is to realize that, in any dual quantum-mechanics theory, we may first analytically continue e^{−βH} in β to construct the operators e^{itH}, whence for each real E one may define the operators δ(H − E) of (2.7). In (2.7), since we wish δ(H − E) to be the inverse Laplace transform of e^{−βH}, we should take the contour of integration Γ to be above any singularities that may arise. This is equivalent to choosing the contour for β = −it to be to the right of any singularities. Linearity then implies the traces of the operators (2.7) to be given by the Fourier transform of Z(−it) := Tr_D e^{itH}, which we take to be given by the continuation of (2.3). Combining these results yields (2.9), where as in (2.7) the contour is taken to lie above the singularity at t = 0, though we may otherwise choose it to run along the real t-axis. For fixed real E > 0, in the limit of large φ̄_b, we may then evaluate the remaining integral using the leading-order stationary-phase approximation. The exponent is stationary at t = ±2πi √(φ̄_b/E), where the integrand on the far right of (2.9) takes the values e^{±4π√(φ̄_b E)}. Since our contour lies above the singularity at t = 0, it is then clear that the contour can be deformed to run through the saddle at t = +2πi √(φ̄_b/E), which would in any case give the larger saddle-point contribution. A more detailed analysis also shows that the steepest ascent curve from this saddle lies along the positive t-axis and thus intersects the contour of integration as desired 2 . As a result, the leading semiclassical approximation gives (2.10), where in the first step we have dropped a factor of 1/2π since it is subleading at leading semiclassical order. In (2.10) we have used the symbol θ(E) to denote the usual Heaviside step function, and S(E) is defined as in (2.11). As usual, the definition (2.11) is made because, at leading semiclassical order, the expectation value of E in the ensemble defined by e^{−βH} is given by (2.12), and because solving (2.12) for β yields the relation β = 2π √(φ̄_b/E) used in (2.11). Given any function f on the real line, we can now define an operator f(H) via (2.13). Let us do so for two functions f_1, f_2, and let f_1 f_2 denote the product of these functions. As a consequence of (2.7) one finds (2.14), which further implies that we have (2.15), where (f g)(H) is again defined as in (2.13) but using the function (f g)(E) := f(E)g(E) in the integral over E. Linearity and (2.9) then express Tr_D f_1(H), Tr_D f_2(H), and Tr_D[(f_1 f_2)(H)] as integrals over E of the corresponding functions weighted by e^{S(E)}. Furthermore, for fixed f_1, f_2, in the limit of large φ̄_b these integrals can be performed in the saddle-point approximation.
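For reference, the stationary-phase evaluation just described can be recorded in slightly more detail. As a sketch, assume that (2.3) takes the standard Schwarzian form ln Z(β) ≈ 4π²φ̄_b/β at this order (up to the topological e^{4πϕ_0} factor). Then

Tr_D δ(H − E) = (1/2π) ∫_Γ dt e^{−iEt} Z(−it), with Z(−it) ≈ exp(i 4π²φ̄_b/t).

The exponent −iEt + i 4π²φ̄_b/t is stationary at t = ±2πi √(φ̄_b/E); at t = +2πi √(φ̄_b/E) it takes the value 4π √(φ̄_b E) = S(E), and β = −it = 2π √(φ̄_b/E) reproduces (2.12). The integrals for Tr_D f_1(H), Tr_D f_2(H), and Tr_D[(f_1 f_2)(H)] are evaluated by the same method.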
Since each integral is real, it must be dominated by the largest saddle on the positive real axis. Denoting the relevant saddle-point values of E as E_1, E_2, E_{12}, we then have (2.17). But since E_1 dominates the first integral, we have f_1(E_{12}) e^{S(E_{12})} ≤ f_1(E_1) e^{S(E_1)}, and similarly f_2(E_{12}) e^{S(E_{12})} ≤ f_2(E_2) e^{S(E_2)}. Thus we find that, at this order, Tr_D[(f_1 f_2)(H)] ≤ e^{−S(E_{12})} Tr_D[f_1(H)] Tr_D[f_2(H)]. Finally, we note that in our semiclassical limit the quantity S(E_{12}) will be large and positive for any fixed E_{12} > 0. Thus we have e^{−S(E_{12})} ≪ 1. In particular, this factor will be much more important than any subleading terms in our approximations. This then establishes the trace inequality (1.6) for arbitrary f_1, f_2 at leading order in the limit of large φ̄_b. In fact, we have shown the inequality to hold strictly in this limit, in the sense that it cannot be saturated. As a result, quantum corrections cannot violate the trace inequality (1.6) when they are treated perturbatively. The same is true for any perturbative higher-derivative corrections one may wish to add. While the above discussion was phrased in terms of JT gravity, the only properties we actually used were that B, C were chosen to be functions of H and that S(E) > 0 for all E. As a result, the same arguments also apply verbatim to such B, C when D is dual to a higher-dimensional gravity theory so long as each path integral is dominated by a black hole saddle (so that S = A/4G > 0). The one subtlety is that, due to the Hawking-Page transition, if one wishes to see the fact that S(E) > 0 at small E, one will need to appropriately analytically continue to low energies the large-energy saddles that dominate the high-temperature phase; see e.g. the discussion of microcanonical entropy from the gravitational path integral in [25]. The above analysis considered choices of operators B, C that each define connected parts of the AlAdS boundary. For example, the operators B = e^{−β_1 H} and C = e^{−β_2 H} are each associated with a single line segment on the boundary. As described in the introduction, when e.g. B instead contains several disconnected components, it may be important to include spacetime wormholes in the analysis. Since such cases quickly become cumbersome, we will not attempt to treat them via explicit calculations of the form described above. However, such cases are readily included in the analysis of section 2.2 below.

Adding matter using the Saad-Shenker-Stanford paradigm

Jackiw-Teitelboim gravity turns out to be a simple enough theory that we can also establish (1.6) for the case where it has a dilaton-free coupling to positive-action matter. Here we require only that the theory admit a UV-completion in which the JT path integral can be treated in the manner described by Saad, Shenker, and Stanford in [20] and in which a semiclassical treatment remains valid. By a 'dilaton-free coupling,' we simply mean that the matter action depends only on the JT metric and does not depend on the dilaton. Furthermore, the specific positive-action matter requirement is that the classical matter action should be bounded below by zero on all asymptotically AdS_2 Euclidean spacetimes with arbitrary topology and arbitrary number of boundaries. The present discussion will require certain details regarding the formulation and properties of JT gravity. In order not to distract from the main thrust of this work, we relegate the more technical analyses to appendix A.
However, we recall here that the action for pure JT gravity on an asymptotically AdS_2 spacetime takes the form given in (2.19). We refer the reader to appendix A for a discussion of the boundary conditions under which (2.19) can be used, though in this section we will refer to the associated conditions as the requirement that the AdS_2 boundary be "smooth." As in section 2.1, there are various possible notions of a semiclassical limit for this theory. And, again as in section 2.1, the effect of ϕ_0 is to weight topologies by a factor of e^{4πϕ_0 χ}, so that taking ϕ_0 large with all else fixed immediately yields (1.6). We will thus follow section 2.1 in showing that (1.6) also holds when we take φ̄_b large while holding all other parameters fixed (including both ϕ_0 and the inverse temperature β). If one examines the action (2.19), one sees that it is strictly linear in ϕ. This remains true in the presence of our dilaton-free matter couplings. Following [20], it is then natural to define the "Euclidean" JT path integral by integrating ϕ over strictly imaginary values, so that the integrals over ϕ give delta-functions of R + 2. The path integral then reduces to an integral over (real) R = −2 constant-curvature Euclidean spacetimes and over any matter fields. The bulk term in (2.19) vanishes for such spacetimes, so the JT action becomes a sum of boundary terms (one for each S^1 connected component of the boundary), each of which can be written in terms of a Schwarzian action [22]. We will assume the remaining integrals to have a good semiclassical limit in the sense that, when φ̄_b → ∞ with all else fixed, the result of the integrals is well approximated by e^{−I_0}, where I_0 is the action of the minimum-action configuration of metrics and matter fields that satisfy the boundary conditions. Due to the simplicity of JT gravity, with this assumption one can again quickly see that (1.6) holds as φ̄_b → ∞. The point is that, as shown in appendix A.4, in this context the action is bounded below by −4πϕ_0 χ − Σ_j 4π²φ̄_b/β_j, where χ is the spacetime's Euler character and β_j is the period of the jth circular boundary. Since χ ≤ n for any 2d manifold with n circular boundaries, we thus find a topology-independent lower bound −Σ_j (4πϕ_0 + 4π²φ̄_b/β_j). It should be no surprise that this is just the action of the Euclidean black holes with inverse temperatures β_j. Furthermore, since the matter action is non-negative, the full coupled matter-plus-gravity action is also bounded below by −Σ_j (4πϕ_0 + 4π²φ̄_b/β_j). It turns out that this is also a good estimate of the actual minimum of the action at large φ̄_b. To see this, let us use g_min to denote the Poincaré disk metric (representing Euclidean JT black holes with periods β_j) that saturates this bound. We then choose any matter field configuration that satisfies the required boundary conditions when taken together with g_min. Since φ̄_b is just an overall coefficient in front of the Schwarzian action (A.36), our g_min cannot depend on φ̄_b. Our full field configuration thus has action I = −Σ_j (4πϕ_0 + 4π²φ̄_b/β_j) + I_matter,0, where the last term is manifestly independent of φ̄_b. But the true minimum I_0 of the full action must be less than or equal to this result, showing that I_0 satisfies −Σ_j (4πϕ_0 + 4π²φ̄_b/β_j) ≤ I_0 ≤ −Σ_j (4πϕ_0 + 4π²φ̄_b/β_j) + I_matter,0. In particular, in the limit of large φ̄_b we see that I_0 will scale linearly in φ̄_b, with leading behavior −Σ_j 4π²φ̄_b/β_j.
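The comparison made in the next step uses only the following elementary consequence of this scaling (a sketch, with β_1, β_2 > 0 denoting boundary lengths): since 1/(β_1 + β_2) < 1/β_1 + 1/β_2, a single circular boundary of length β_1 + β_2 contributes strictly less to −I_0 at leading order, namely 4π²φ̄_b/(β_1 + β_2), than two separate boundaries of lengths β_1 and β_2, which contribute 4π²φ̄_b/β_1 + 4π²φ̄_b/β_2. The gap grows linearly in φ̄_b and therefore dominates the φ̄_b-independent matter and topological terms.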
The inequality (1.6) then follows immediately by noting that if β B , β C , β BC are the lengths of the relevant boundaries then we must have β BC = β B + β C , and thus also Again, since this also forbids saturation of the trace inequality at this order, (1.6) must continue to hold in the presence of both higher-order semiclassical corrections and perturbative higher-derivative corrections. The trace inequality in general semiclassical gravity theories The remainder of this work is devoted to arguing that the bulk analog (1.6) of the trace inequality (1.1) should hold in general semiclassical theories of gravity and for general operators B, C. After the discussion of sections 1 and 2, this should not be a surprise. When the path integrals are dominated by black holes, it is natural to expect the behavior seen in section 2 where very general computations are semiclassically controlled by black hole thermodynamics whence (as described in section 2.1) the trace inequality follows from positivity of the microcanonical entropy S(E). Furthermore, when the gravitational path integrals are not dominated by black holes, it is natural for the bulk to behave like a standard quantum system so that the argument in (1.4) should apply. Putting these together should be expected to yield an argument for general theories of gravity. What makes this discussion subtle is our lack of understanding of the Euclidean gravitational path integral, as well as the associated conformal factor problem that makes the Euclidean action unbounded below (see e.g. [21]). We therefore address these issues in stages below. We first return to the non-gravitational setting in section 3.1 and find a path integral derivation of our trace inequality in the semiclassical limit. We then show in section 3.2 that the broad outline of this non-gravitational path-integral argument can be transcribed to the gravitational case, so long as one makes a number of assumptions concerning both the gravitational action and the treatment of the conformal factor problem. We will take care, however, to formulate such assumptions in such a manner that they remain plausible despite the above-mentioned fact that the gravitational action is not bounded below. This plausibility argument is then made in section 4, which in particular shows these assumptions to imply a new positive-action conjecture that extends the original positive-action conjecture of Hawking [21,26] in several ways. The conjecture can then be verified for JT gravity with minimal (or, more generally, dilaton-free) couplings to positive-action matter. Furthermore, in simple contexts, for more general theories it can be related to positivity of the Hamiltonian with general boundary conditions. Trace Inequality from the Semiclassical Euclidean Path Integral: The nongravitational case We now return to the non-gravitational context to describe a Euclidean path integral derivation of (1.1) in the semiclassical limit. We restrict ourselves to the case where both B and C are defined by real sources. We will also assume that, with any fixed set of allowed real-valued sources, the corresponding Euclidean action is both real and bounded below. We will also assume that each such path integral is dominated by a saddle (or by a set of saddles) in the semiclassical limit, and in particular that the action of any configuration is always greater than or equal to the action of the dominant saddle. 
The latter will be true under assumptions that prevent the action from being minimized on the boundaries of the space of allowed field configurations. Such assumptions are reasonable since regions near such boundaries typically have infinite measure, so that minimizing the action on such boundaries typically causes the path integral to diverge. We leave the exceptional cases open for future study. Our restriction to real sources means that our path integrals are manifestly real. Such integrals can only be dominated in the semiclassical limit by real saddles corresponding to global minima of the action over the contour of integration. In particular, all saddles (as well as more general configurations) discussed below will lie on the contour of integration that defines the path integral. This means that no issues of contour deformations can be relevant to our discussion. We will also consider only cases where the right side of (1.1) is dominated by a single saddle. Contexts with more than one equally-dominant saddle typically describe phase transitions; see e.g. the classic discussion of Hawking and Page [27]. Close to such a phase transition one typically finds that formally non-perturbative effects associated with additional saddles and/or mixing between saddles are more important than perturbative corrections; see e.g. recent discussions in [28,29] for condensed matter analogues and in [5,[30][31][32]. We thus save further consideration of this case for future study. It will be enough for our purposes to work at leading order in the semi-classical expansion, so that the path integral is approximated by e −I , where I is the Euclidean action of the dominant saddle. This is to be a model for the leading-order analysis of (gravitating) bulk duals in section 3.2, though in that section we will use a rather different method to include quantum corrections. We have already mentioned that we are interested in quantum field theories with sources, say in d Euclidean spacetime dimensions. In quantum field theories, the UV structure of the Hilbert space can be sensitive to the choice of sources, and in fact to various timederivatives of such sources when d is large. We will therefore further restrict discussion to the case where the Hilbert space of interest can be thought of as being defined by a set of time-translation-invariant boundary conditions that define an associated "cylindrical" Euclidean manifold C ∞ = B × R with translation-invariant sources, where B is an appropriate (d − 1)-dimensional manifold and × denotes the Cartesian product of metrics as well as of the underlying manifolds. The equivalent definition in Lorentz signature would thus restrict us to considering Hilbert spaces defined by static metrics 3 . In particular, note that the Z 2 relection symmetry of the R factor implies that C ∞ also admits a Z 2 "time-reversal" symmetry. We refer to C ∞ as the infinite cylinder. It will be useful to define corresponding finite cylinders C ϵ = B × [0, ϵ], and to define B 0 to be the boundary of C ϵ at the zero of the interval. Due to the time reversal symmetry of C ∞ , we will need this definition only for positive values of ϵ. We emphasize that this is a restriction on the background fields that define the Hilbert space and not on the background fields used to construct any particular state. 
Of course, the two must be compatible, so in practice we will consider only states that are prepared by manifolds-with-boundary where a neighborhood of each boundary contains a rim diffeomorphic to a finite cylinder C ϵ (or, more properly, to the part of this cylinder associated with the half-open interval [0, ϵ)). In most of this section we will also assume that the Euclidean action I for our theory is the integral of a local Lagrangian L, with L built from fields and their first derivatives only. In particular, assume that I = M L without additional boundary terms at ∂M . Of course, many potential such boundary terms can be absorbed into L by the addition of a total divergence (so long as it is again built from fields and their first derivatives). The above condition implies that the equations of motion are of no more than 2nd order, but at the end of this section we will see how to include perturbative higher derivative corrections. We begin by considering positive operators B and C. Such operators may always be written B = b † b, C = c † c for appropriate b, c. We assume that b, c are computed by some (perhaps complex) linear combination of Euclidean path integrals with real sources. For simplicity, we begin with the case where each operator b, c is computed by a single Euclidean path integral, saving non-trivial linear combinations for later. However, in contrast to section 2.1, we now include the case where the boundary associated with Tr(BC) may be disconnected. We use the notation M b , M c , M B , M C to denote the manifolds over which the path integrals for b, c, B, C are performed, together with the appropriate set of sources. To remind the reader of this, we will sometimes refer to M b , M c , M B , M C as source manifolds. In particular, we take such source manifolds to specify the full set of background structures (e.g., spin-structures, etc.) which are required to define the theory 4 . A pictorial representation of such a source manifold is provided on the upper left panel of figure 1. Since b is an operator on a given Hilbert space, we may take the boundary ∂M b to be the disjoint union of two parts ∂M in b , ∂M out b describing the input and output of b, and where the sources near both ∂M in b and ∂M out b are those associated with the given Hilbert space. In particular, we assume that M b may be chosen to be some C ϵ in some neighborhood of each of ∂M in b and ∂M out b so that the boundary ∂M in b (or ∂M out b ) agrees with B 0 . As mentioned above, we refer to this as requiring M b to have rims, and we make analogous requirements for the source-manifolds with boundary associated with any operator discussed below. Since the theory is non-gravitational, one should regard points on M b and C ϵ as being labelled. The agreement of We then define a closed source manifoldM b (without boundary, so that is then computed by the path integral over the resultingM b ; see figure 1. It is useful to take the definition of M b to include the partition of ∂M b into ∂M in b and ∂M out b . We may then describe b † as being computed by the path integral over but keeping all sources unchanged; see again figure 1. Corresponding assumptions and definitions will also be made for any other operator c and the associated M c , ∂M c , andM c . In particular, since b and c both act on the same Hilbert space H, the inputs of b must be identical to those of c, and similarly for the outputs. 
As a result, the labelling of points on B_0 also defines source-preserving diffeomorphisms ϕ_bc (identifying the input boundary of M_b with the output boundary of M_c) and ϕ_cb (identifying the input boundary of M_c with the output boundary of M_b). We may then use ϕ_bc (or ϕ_cb) to define the source manifold M_bc (or M_cb) by identifying the input of M_b with the output of M_c (or vice versa). The path integral over M_bc then clearly computes the operator bc. Using both ϕ_bc and ϕ_cb to make identifications allows us to further construct the closed source manifold M̃_bc, over which the path integral computes Tr(bc). Note that swapping b and c would define the source manifold M_cb associated with the operator cb, but that M̃_cb = M̃_bc so that Tr(bc) = Tr(cb) as expected; see figure 2 below. In order to derive (1.6), we will thus need to compare the Euclidean path integrals over M̃_B, M̃_C, and M̃_BC. Since the sources on M_b are real, they must agree with those on M_b† up to an appropriate diffeomorphism. Thus M̃_B admits a Z_2 symmetry that exchanges its b and b† regions. Before proceeding, we pause to comment on our depiction of the source manifolds M_b, etc. in the accompanying figures. Below, we will wish to show features of individual configurations of fields that appear in the path integral (in addition to the source features shown thus far). Such information makes the figures correspondingly more complicated, so that it is useful to simplify our illustrations in other ways, even at the expense of making them more abstract. See figure 3 below for the dictionary relating figures thus far to those that will appear in the remainder of this work. Figure 3. This figure illustrates the scheme that we adopt below to depict source manifolds along with particular configurations on such manifolds that arise in the associated path integrals. It also shows the connection to the old scheme. Left panel: A configuration for the path integral performed over a source-manifold M_b will be described by the coloration assigned to that manifold. The left-hand side of the equivalence uses the old scheme with a two-dimensional depiction of the source manifold, but now with the coloration added. The right-hand side uses the new scheme in which the source manifold is drawn as one-dimensional and the × is the only indication of structure associated with the sources. Right panel: A configuration for the path integral performed over the closed source-manifold M̃_b is shown using both schemes. At leading semi-classical order, comparing path integrals over M̃_B, M̃_C, and M̃_BC is equivalent to comparing the dominant saddles σ_B, σ_C, and σ_BC on these source manifolds. We begin with an observation, which we codify as a lemma to facilitate future reference: Lemma 1. Consider an operator D = d†d, where d is computed by a Euclidean path integral over a source manifold M_d. The source manifold M̃_D = M̃_{d†d} then clearly enjoys a Z_2 reflection symmetry as discussed above. This symmetry is in fact preserved by any saddle σ_D that dominates the path integral over M̃_D. In cases where the minimum value of the action is shared by several saddles, the symmetry is preserved by at least one such σ_D. To prove Lemma 1, we begin by considering an arbitrary saddle σ^0_D for Tr D. Let k^0_d be the part of this saddle on M_d, and let k^0_{d†} be the part on M_{d†}. Here we use the symbol k (with subscripts) for configurations that are not given to us as saddles of the original path integrals. If the saddle σ^0_D breaks the Z_2 symmetry of the background fields, then k^0_d and k^0_{d†} will not be related by Φ_d (the reflection relating the M_d and M_{d†} regions of M̃_D). In this case we can use k^0_d and k^0_{d†} to build new configurations for the path integral over M̃_D.
In particular, as illustrated in figure 4 and gluing this to k 0 d defines a new configuration k d D for the path integral overM D . We can also define a corresponding k d † D by gluing k 0 d † to its image under the inverse of Φ d . Note that k d D , k d † D will not generally solve the equations of motion at the surface where M d meets M d † , and in fact that derivatives of fields in k d D , k d † D will not generally be continuous at these surfaces. The key observation needed to prove our Lemma is then that the action S is additive, in the sense that This additivity follows from the fact that S = L, together with the requirement that L depend only on fields and their first derivatives. The point is that σ 0 D must be smooth since it solves the Euclidean equations of motion with smooth boundary conditions. (We assume these equations to be elliptic.) Furthermore, by construction, the values of fields at the boundaries of k 0 d will agree with those at the boundaries of Φ d [k 0 d ], and similarly for k 0 . This means that the fields defined by either k d D or k d † D are continuous. And while the first derivatives may not be continuous at the boundaries of M d and M d † , they have well-defined limits from each side; i.e., the first-derivatives have at worst step-function discontinuities. This means that L is bounded, and in particular has no delta-function contributions at the boundaries between the M d and M d † regions of M D . It follows that the action does indeed satisfy (3.1)-(3.3). Comparing these equations shows that the smaller of I(k d D ) and I(k d † D ) must be less than or equal to I(σ 0 D ), and that it is strictly less if I(k 0 are not saddles then they cannot minimize the action and the action of the dominant saddle must be even smaller; i.e., We are now in a position to prove our main result (1.6) at leading semi-classical order. As stated above, at this order, comparing path integrals overM B ,M C , andM BC is equivalent to comparing the dominant saddles σ B , σ C , and σ BC on these source manifolds. In particular, at this order we have Tr B = e −I(σ B ) , Tr C = e −I(σ C ) , and Tr BC = e −I(σ BC ) . (3.5) The pieces can then be recombined to make a pair of Z 2 -symmetric configurationsk B andk C (shown at right) that contribute to the path integrals for, respectively, Tr B and Tr C. We are interested in the caseM B =M b † b ,M C =M c † c , so that we may also writẽ M BC =M d † d using d = bc † and the fact thatM b † bc † c =M cb † bc † (which in turn follows from the fact that our gluing operation is invariant under cyclic permutations). Applying Lemma 1 toM BC =M d † d , we may take σ BC to have a Z 2 reflection symmetry that exchanges d = cb † and d † = bc † . We may then cut the saddle σ BC into the 4 pieces We may now glue the resulting k b and k b † together to define a Z 2 -symmetric configuratioñ k B = k bb † for the path integral overM B ; see figure 5. Note that Z 2 symmetry requires the fields on k B to be continuous at the boundaries between k b and k b † . Thus the action I(k B ) is well-defined. We may also define the analogous configuration k C = k cc † for the path integral overM C , whose action I(k C ) is again well-defined. Furthermore, as in the proof of Lemma 1 we have I(k B ) + I(k C ) = I(σ BC ). And since the dominant saddles σ B , σ C must have actions no larger than I(k B ), I(k C ), using the leading semiclassical approximation (3.5) we find as desired. 
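In equations (a sketch of the content of (3.6), using the relations just stated): Tr BC = e^{−I(σ_BC)} = e^{−I(k̃_B) − I(k̃_C)} ≤ e^{−I(σ_B)} e^{−I(σ_C)} = (Tr B)(Tr C), where the inequality uses I(σ_B) ≤ I(k̃_B) and I(σ_C) ≤ I(k̃_C).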
However, so far we have required each operator b, c to be given by a single path integral. We would also like to discuss operators given by a linear combination of path integrals. I.e., we wish to allow b = Σ_i b_i and c = Σ_i c_i, where each of the b_i, c_j is a single path integral as above. This generalization is straightforward when there is a single dominant saddle σ_BC for Tr(BC), which we as usual assume to be the case. To proceed, we first expand Tr(BC) as the corresponding sum of terms. We then note that a dominant saddle σ_BC for Tr BC will be associated with some particular term b_i b†_j c_k c†_l (and also with its equally-dominant adjoint if this term is not real). As in the proof of Lemma 1 we may then cut this saddle into two pieces as before. Gluing each of these to its reflection 5 then defines Z_2-symmetric configurations for the corresponding diagonal terms, given by path integrals over the associated closed source manifolds. Since the original saddle σ_BC was dominant (with some action I(σ_BC)) and our saddles are real, additivity again requires that the actions of these two Z_2-symmetric configurations sum to 2I(σ_BC), and hence that they are in fact saddles with actions equal to I(σ_BC). (This follows from the analogue of (3.4) when σ^0_D is dominant, so that the left- and right-hand sides are equal.) As a result, the new saddles may be used as dominant saddles. Using either saddle in this way then reduces us to consideration of a single Z_2-symmetric saddle, whence the rest of the argument follows as above.

5 The result of such a reflection gives a corresponding piece of the analogous saddle for the adjoint term. Thus we can also think of our construction as pasting pieces from the ijkl term together with pieces of the jilk term.

It now remains to study higher-derivative corrections about a saddle σ_BC. We will use I_0 to denote the original action without such corrections. We will treat corrections to I_0 perturbatively, which means that at each order the saddles are found by solving a 2nd-derivative equation of motion with sources determined by the lower-order parts of the solution. At the off-shell level needed for our argument, at each order n in perturbation theory we may treat the action as being a 2nd-order polynomial functional I_n of the fields. The coefficients of the quadratic terms in this functional are given by the second variations of I_0 about the lower-order saddle. The action is thus positive semi-definite for perturbations about a dominant saddle. The coefficients in the linear term are given by varying the higher-derivative corrections at linear order. We shall assume that any zero-modes of the linearized theory about σ_BC are associated with symmetries of I_0 that are preserved by the higher-derivative terms, and thus in particular that they are preserved by the linear term in I_n. It then follows that I_n is bounded from below, and that it is in fact minimized at the desired saddle. Furthermore, in a perturbative treatment there can be no danger of violating (3.6) unless it is saturated by the classical 2-derivative theory. As a result, if we again suppose that the dominant 2-derivative saddle for each path integral is unique 6 , then we need only consider perturbations around saddles σ̃_B, σ̃_C, σ_BC, where σ̃_B, σ̃_C are constructed from σ_BC by using the above cut-and-paste procedure. In particular, at any point p_B on the B side of M̃_BC, the sources for the first correction will precisely match those at a corresponding point p_B on M̃_B, and similarly at any point p_C on the C side. It follows that the setting for computing the first-order corrections is of precisely the same form as the zero-order problem defined above, where the sources for this problem on M̃_BC may again be reproduced by gluing M̃_B to M̃_C. We may thus argue in exactly the same way that (3.6) also holds at first order in higher-derivative corrections, and in fact iteratively at every higher order as well.

6 In particular, it is unclear how to control the possibility that two a priori unrelated saddles might have precisely the same action at the two-derivative level, but might then have this degeneracy lifted by higher-derivative corrections in a manner unfavorable to our argument. We leave consideration of this interesting but finely-tuned possibility open for future study.

The trace inequality for semiclassical gravity

We now turn our attention to bulk gravity theories. For convenience of notation we continue to suppose that the bulk theory is dual to a hypothetical non-gravitational theory D, or to an ensemble of such theories, though in the end our arguments will be entirely in the bulk. In particular, the arguments apply even to bulk theories for which dual non-gravitational boundary theories are not known to exist. On the D side of the duality, the path integrals for ⟨Tr_D(B) Tr_D(C)⟩ and ⟨Tr_D(BC)⟩ may be formulated as path integrals over source manifolds M̃_B, M̃_C, and M̃_BC constructed just as in section 3.1. We again confine the discussion to the case where the boundary conditions defined by any such source manifold are real. By this we mean that, if the formalism allows complex bulk configurations k to be considered, then if k_B satisfies the boundary conditions defined by M̃_B, so does the complex conjugate k*_B. We also again require each of the associated source-manifolds-with-boundary M_b, M_c to have C_ϵ rims for some ϵ > 0 as described in section 3.1. The AdS/CFT dictionary of [17] (or its extrapolation to ensembles) then states that ⟨Tr_D(B) Tr_D(C)⟩ and ⟨Tr_D(BC)⟩ may equivalently be computed as bulk path integrals that sum over all bulk spacetimes with boundary conditions determined by the above source manifolds M̃_B, M̃_C, and M̃_BC. As stated in the introduction, we take this bulk path integral to be normalized by dividing by the no-boundary state or, equivalently, we take the bulk path integral with boundary conditions set by some M̃ to sum only over bulk spacetimes in which every point can be connected to the boundary at M̃. Disconnected closed universes are not included in our sum. At leading semiclassical order the basic structure of our arguments will closely follow those of section 3.1. In particular, we will again restrict to situations far from phase transitions by requiring the bulk path integral for ⟨Tr_D(B) Tr_D(C)⟩ to have a single dominant saddle. However, we will need to deal with two new, inter-related complications. The first is that gravitational actions are generally not bounded below on the space of real Euclidean fields. The second is that, as a result of the issue just described, the so-called "Euclidean" gravitational path integral cannot actually be taken to be defined as the integral over real Euclidean fields. To allow the casual reader to focus on the big picture, the present section presents an overview of the argument for (1.6) and deals with the above issues by simply making assumptions about the gravitational path integral as needed. We then return to address those assumptions in section 4.
We begin by discussing the leading-order result, in which we take each path integral to be dominated by a smooth bulk saddle. Higher order corrections will be discussed later, at the end of this section. We are free to call the dominant bulk saddles for each path integral σ B , σ C , and σ BC in direct analogy with section 3.1. We thus have (3.7) In particular, we suppose that the semiclassical approximation to our path integral satisfies the following assumption: For a bulk path integral specified by boundary conditions defined by a (compact) closed source manifoldM with real sources, we assume that there is a class of configurations KM such that i) the bulk fields described by any k ∈ KM are continuous, ii) the bulk action I is a real-valued functional on KM and iii) in the semiclassical limit, the path integral is dominated by a real saddle σ ∈ KM that minimizes the action I over KM . In particular, we have I(σ) = min k∈KM I(k). Furthermore, if KM includes a complex configuration k, then the complex conjugate k * also lies in the same KM . We similarly assume that the class KM is invariant under a corresponding action of any symmetry ofM . As described in section 3.1, this assumption is naturally satisfied in contexts where the Euclidean path integral over real fields converges. In that case, KM is just the class of real field configurations. But this is not generally the case in gravitational theories. We thus emphasize that Assumption 1 does not require KM to contain all real configurations, and in fact does not generally require configurations k ∈ KM to be real at all (except for the dominant saddle in the semiclassical limit). Instead, it requires only that I(k) be real-valued on KM . This flexibility will be useful in later sections where we discuss several different possible choices of KM associated with different approaches to defining the path integral. Since the present section addresses a general theory of gravity, we will make no attempt to write down an explicit action. However, we do require the action to satisfy the following additivity property which can be checked in any particular theory (and which will be discussed for familiar examples in section 4.2): Assumption 2. Consider two boundary source manifoldsM bc ,M c † d , whereM bc is given by cyclicly gluing together the input and output boundaries of some M b , M c , and whereM c † d is similarly constructed from M c † , M d . Given any real bulk saddles σ bc ∈ KM bc , σ c † d ∈ KM c † d , we assume there is a prescription for slicing σ bc into two pieces k b , k c , and of similarly slicing σ c † d into two pieces k c † , k d which satisfy (3.8) Figure 6. Real bulk saddles σ bc and σ c † d for Tr D bc and Tr D c † d can be cut into pieces. Note that the cutting step generally creates new boundaries (dashed lines) not restricted by the asymptotically AdS boundary conditions. When the data on the two new boundaries are related by a diffeomorphism Φ, the pieces can be pasted together to make a bulk configuration k bd for Tr D bd. We emphasize that this condition need only be satisfied by real saddles and not by general configurations in KM bc and KM c † d . We also assume that the slicing prescription preserves any symmetries of the bulk saddle σ bc . As shown in figure 6, the cutting of σ bc into k b , k c generally creates new boundaries not restricted by properties ofM bc . As a result, we require the action I to be defined on such bulk configurations. 
This may require the specification of appropriate boundary terms at the new boundaries. It may also require corner terms where the new boundaries intersectM bc ; see related discussions in [33][34][35]. Furthermore, suppose that there is a diffeomorphism Φ from the new boundaries of k c to the new boundaries of k c † that preserves the values of all bulk fields (though which need not preserve normal derivatives of bulk fields). Then we can glue the new boundaries of k b to those of k d to create a new configuration k bd ∈ KM bd whose actionwe assume to be I(k bd ) = I(k b ) + I(k d ); (3.9) see again figure 6. As in section 3.1, we first consider the case where each object in (1.6) is computed by a single path integral, returning later to cases that involve linear combinations of path integrals. We will need the analogue of Lemma 1 for the gravitational context: Figure 7. When D = d † d, an arbitrary bulk saddle σ 0 D for Tr D D can be reflected to give another saddle σ R D for the same bulk path integral. We can cut σ 0 One of these must have action less than or equal to that of the original saddle σ 0 D . If σ 0 D was dominant, Assumption 1 would imply that we have now constructed two Z 2 -symmetric saddles both having precisely the same action as σ 0 D . Lemma 2. Consider an operator where d is computed in D by a Euclidean path integral over a source manifold M d with real sources. The source manifoldM D =M d † d then clearly enjoys a Z 2 symmetry as discussed above. This symmetry is in fact preserved by any bulk saddle σ D that dominates the bulk path integral for Tr DMD . In cases where several allowed bulk saddles share this minimum value of the action, the symmetry is preserved by at least one such σ D . Using assumptions 1 and 2, we can give a proof of Lemma 2 that directly parallels the proof of Lemma 1 in section 3.1. The argument is depicted in figure 7. We first consider an arbitrary saddle σ 0 D that lies in KM D and use assumption 2 to divide it into k 0 d † , k 0 d . Note that the values of the bulk fields on the new boundaries of k 0 d † agree with those on the new boundaries of k 0 d by continuity on Σ 0 D ; see again Assumption 1. But we can use the reflection symmetry ofM D to construct a reflected saddle σ R D that again lies in KM D , and which we then divide into k R d † , k R d . Because k 0 d and k R d † are related by the reflection symmetry, the field values on their new boundaries agree. We may thus paste these pieces together to form a new configuration k d D ∈ KM D with an explicit bulk reflection symmetry, and we may also construct the analogous k d † D ∈ KM D from k R d and k 0 d † . As in section 3.1, the additivity properties (3.8), (3.9) applied to the current pieces then imply that either . As a result, if σ 0 D is a dominant saddle, then either k d D or k d † D must be an equally-dominant saddle that preserves the desired symmetry. Lemma 2 will soon allows us to prove the trace inequality (1.6) at leading semi-classical order. As stated above, at this order we have Tr D B = e −I(σ B ) , Tr D C = e −I(σ C ) , and Tr D BC = e −I(σ BC ) . (3.10) We are interested in the caseM B =M b † b ,M C =M c † c , so that we may also writẽ M BC =M d † d using d = bc † and the fact thatM b † bc † c =M cb † bc † . (This follows from the fact that our gluing operation is invariant under cyclic permutation of the parts to be glued). Applying Lemma 2 toM BC =M d † d , we may take σ BC to have a Z 2 symmetry that exchanges d = bc † and d † = cb † . 
We may then cut the saddle σ BC into the two pieces k B , k C associated with the M B , M C source manifolds. Furthermore, since Assumption 2 required the slicing prescription to preserve symmetries of the original bulk saddle, the boundaries ∂k B , ∂k C will be invariant under corresponding reflection symmetries. This will in particular be true for the new boundaries created by slicing σ BC into parts. We now make a final monotonicity assumption regarding our action. Assumption 3. Consider again the setting of assumption 2 and the pieces k b , k c described there. Let ∂ new k b denote the new boundaries of k b created by slicing σ bc in two; i.e., these are the boundaries of k b that were not boundaries in σ bc . We assume that when k b is invariant under a Z 2 symmetry, we may use this symmetry to glue any point of ∂ new k b to its image to define a configurationk b ∈ KM b associated with the bulk path integral for Tr D (b); see figure 8 below. We further assume that this gluing operation does not increase the action. In other words, we assume Before using this assumption, it is important to explain why the relation (3.11) is natural, and in particular why it is not generally an equality. In the nongravitational discussion of section 3.1, the topology of any saddle was always that of the corresponding source manifold M that defined the relevant path integral. As a result, the equivalent of ∂ new k b always separated cleanly into input and output boundaries. In particular, in the non-gravitational case the reflection symmetry that acted on ∂ new k b had no fixed points. Thus the equivalent ofk b was always smooth. In the gravitational context, we may indeed expect that I(k b ) = I(k b ) whenk b is smooth. However, the dimensionality of the bulk saddle is typically greater than that of the source manifold. In particular, the topology of source manifold no longer dictates the topology of the bulk. As a result, the construction ofk b from k b will introduce a conical deficit of π at any fixed points of the reflection symmetry that lie on the new boundary ∂ new k b of k b ; see again figure 8. In such cases, the monotonicity assumption (3.11) amounts to the condition that conical deficits give a non-positive contribution to the Euclidean gravitational action. This is consistent with the standard sign choices for the Euclidean Einstein-Hilbert and Jackiw-Teitelboim actions (see e.g. [21]). In fact, for later purposes it is useful to add a further assumption which essentially states that the contribution of conical deficits is strictly negative: Assumption 4. Consider again the context of Assumption 3. If the reflection symmetry of k b has fixed points on ∂ new k b , then we in fact have (3.12) Assumption 4 will be of use when we consider perturbative corrections, though we will set it aside for now. Returning to the main argument, we may use the above procedure to construct configurationsk B ,k C for Tr D (B), Tr D (C) from the pieces k B , k C that were cut from σ BC . We then apply Assumption 3, replacing b in (3.11) by either B or C. Finally, we apply the minimization assumption (Assumption 1) to find that the dominant saddles The above reasoning suffices for the case where B, C each represent a single boundary condition. The remaining case where they are linear combinations of boundary conditions then follows just as at the end of section 3.1. 
Starting with a general saddle for any term in the sum associated with Tr D (BC), Assumptions 1 and 2 imply that there is another saddle with equal or lesser action that is associated with one of the diagonal terms in the sum. And this diagonal term can then be used as above to construct saddles σ B , σ C for Tr D B, Tr D C that satisfy (3.13). So, again, the desired result holds. We may also address perturbative quantum corrections to (1.6). This turns out to be straightforward since we take the path integral for ⟨Tr D (BC)⟩ to be dominated by a single saddle. A key point is that perturbative quantum corrections are explicitly given by quantum field theory in the curved spacetime backgrounds defined by our saddles. In particular, they are computed by non-gravitational path integrals, or by path integrals that include only perturbative gravitons, of the general form described in section 3.1, but where the leadingorder bulk saddles σ BC , σ B , σ C now play the role ofM BC ,M B ,M C from section 3.1. A second key point is that, in any strictly perturbative framework, quantum corrections can lead to violations of (1.6) only if this inequality is saturated at leading semi-classical order. Since we assume unique saddles σ BC , σ B , σ C for, respectively, Tr D (BC), Tr D (B), and Tr D (C), our arguments above require that σ B , σ C can be obtained by slicing σ BC into two pieces, each of which is separately invariant under a reflection symmetry. The saddle σ B is then obtained by using the reflection symmetry of the B piece to glue together any new boundaries created by the slicing operation. The saddle σ C is also constructed in the analogous fashion. Furthermore, the above argument also shows that strict saturation of (1.6) at leading semiclassical order requires one to be able to reconstruct σ BC from σ B , σ C by a procedure directly analogous to that buildingM BC fromM B ,M C (shown previously in figure 5); i.e., σ B = k)B and σ C = k C . Moreover, the objects k B , k C used to constructk B = σ B ,k C = σ C now play the roles of M B , M C from section 3.1. Thus, for example, the quantum correction to Tr D B is precisely Tr pert bulk B pert bulk where this is a trace over the perturbative bulk Hilbert space and where B pert bulk is an operator on that Hilbert space. Furthermore, the reflection symmetry of σ B implies the operator B pert bulk to be positive. Indeed, since the analogous statements hold for C and BC, the simple quantum-mechanical argument given by (1.4) can be used to write Tr pert bulk (B pert bulk C pert bulk ) ≤ (Tr pert bulk B pert bulk )(Tr pert bulk C pert bulk ). (3.14) Thus we see that, at each order in the semiclassical expansion, quantum corrections cannot induce violations of (1.6). Since we have not specified the gravitational theory, it is not natural at this stage to separate out discussions of higher derivative terms. We will instead address related issues in section 4 when we discuss the status of our assumptions in various classes of theories. The status of our assumptions in general gravitational theories We now turn to a discussion of assumptions 1-4 from section 3.2 for general theories of gravity. These assumptions require the semiclassical approximation to Euclidean quantum gravity to be determined by minimizing an action functional over appropriate classes KM of spacetimes satisfying boundary conditions given by someM . 
At first glance, this idea may appear to be famously false in Euclidean Einstein-Hilbert gravity due to the conformal factor problem [21]. In particular, one can find smooth Euclidean bulk spacetimes satisfying arbitrary boundary conditions that make the Euclidean Einstein-Hilbert action arbitrarily negative, so that minimal action configurations do not exist. Any attempt to establish the assumptions used in section 3.2 must thus begin with some viewpoint on how the conformal factor problem is to be addressed. We have already discussed the Saad-Shenker-Stanford paradigm for JT gravity (with dilaton-free matter couplings) in section 2.2, where we showed that it leads to the desired trace inequality in the semiclassical limit. While we see no simple argument that such a paradigm satisfies our assumption 3 or assumption 4, we argue below that our assumptions are in fact satisfied within two other (perhaps overlapping) paradigms for dealing with the conformal factor issue. The first, which we call the Gibbons-Hawking-Perry paradigm, is a hypothetical non-linear generalization of the contour rotation prescription described in [21] for linearized fluctuations about Euclidean Schwarzschild. The second follows [36] in taking the Lorentzian path integral to be fundamental, evaluating the Lorentzian path integral with "fixed-area boundary conditions," and arguing that the result reduces to an integral over Euclidean spacetimes that are on-shell up to the presence of conical singularities. We discuss each of these paradigms in turn below. The first discussion (section 4.1) is necessarily brief and schematic due to the hypothetical nature of the supposed extension of known results. More details will be provided when considering the second paradigm in section 4.2. This will allow key elements of the assumptions either to be proven or to be reformulated as precise conjectures concerning the classical action which should be amenable to future mathematical and numerical studies. The Gibbons-Hawking-Perry Contour Rotation Paradigm As shown long ago by Gibbons, Hawking, and Perry [21], at the linearized level for familiar cases one can obtain physically reasonable results (see also [37][38][39][40][41][42][43][44][45][46][47]) by 'rotating the contour of integration.' This in fact means that one defines the path integral to integrate over some nontrivial contour Γ in the space of complex metrics. In the linearized cases mentioned above, the action is real and bounded-below on the Γ chosen in [21]. This last point contrasts with the Saad-Shenker-Stanford paradigm which also uses a non-real contour, but on which the action is manifestly complex. In the case considered by Gibbons, Hawking and Perry, the action also diverges to +∞ in all asymptotic regions of Γ. As a result, the action on Γ is necessarily minimized at some finite smooth saddle-point that dominates the path integral in the semiclassical limit. If this same structure persists in the non-linear theory, then Assumption 1 is clearly satisfied if we simply redefine configurations on Γ to be 'real' for the purposes of that assumption. See also the discussions of contour rotation for the full theory in [48,49]. Now, Assumptions 3 and 4 require the full space KM to admit configurations with conical singularities. 
If the saddles are known to be smooth, and if the construction of Γ respects symmetries of the boundary conditions, then we can take KM to be given by those spacetimes lying on Γ which can be formed from smooth spacetimes by applying a single cut-and-paste operation of the type described in Assumption 2. This choice allows us to restrict attention to spacetimes that are 'not too wild' and on which we can hope to have some control over the action as a function on KM . Furthermore, if the specification of the desired contour Γ is sufficiently local in spacetime, then cutting spacetimes γ 1 , γ 2 ∈ Γ into pieces and pasting them together to build a new configuration γ will also naturally yield γ ∈ Γ. As a result, the above definition of KM would then be manifestly invariant under such operations. So long as the spacetimes satisfy appropriate boundary conditions, Assumption 2 will then be satisfied if our action includes appropriate boundary terms. Explicit discussions of such boundary conditions and boundary terms for JT and Einstein-Hilbert gravity will appear in section 4.2 below. It thus remains only to discuss Assumptions 3 and 4. As described between (3.11) and (3.12), for Euclidean geometries in Einstein-Hilbert gravity the two sides of (3.11) differ only by contributions from the conical singularities. A standard calculation shows that this gives an extra factor of e −A/4G on the left hand side, where A is the area of the conical singularity. Similarly, in JT gravity (normalized as in (2.19)), the difference is a factor of e −4πϕ evaluated at the singularity. As a result, in either of these theories, so long as A (or ϕ) is positive on the contour Γ, these assumptions will be satisfied as well. As a final comment on this paradigm, let us address the question of perturbative higher derivative corrections to either JT gravity or Einstein-Hilbert gravity. Rather than attempt to discuss Assumptions 1-4 for the full path integral with higher-derivative corrections, we will instead take perturbative treatment of such terms to mean that their corrections to the two-derivative theory are computed by first finding saddles that would dominate the semiclassical computation in the two-derivative theory, and then using the higher derivative terms to compute perturbative corrections to the relevant actions. So long as we suppose that the dominant 2-derivative saddle for each path integral is unique, we may then argue that the trace inequality (1.6) is preserved by higher derivative corrections in direct analogy with the non-gravitational discussion at the end of section 3.1. The only comment needed to promote that argument to the gravitational context is to again note that assumption 3 (applied at the level of the two-derivative theory) means that we may indeed confine discussion of higher derivative corrections to perturbation theory about saddle points for Tr D (B) , Tr D (C) in the two-derivative theory that are constructed from the two-derivative saddle for Tr D (BC) using the cut-and-paste procedure above. As in the non-gravitational discussion at the end of section 3.1, we leave open for future study the more general but finely-tuned case where the saddles fail to be unique. Euclidean path integrals from fixed-area Lorentzian path integrals While the discussion of the Gibbons-Hawking-Perry paradigm in section 4.1 was straightforward, it also relied on the conjectured existence of a hypothetical contour Γ with certain properties. 
Furthermore, since the form of the presumed Γ is not known, it is difficult to perform further checks within that approach. In contrast, we shall see that the paradigm described in [36] allows a more detailed discussion of Assumptions 1-4 and also presents better-defined opportunities for further consistency checks. For lack of a better name, we will refer to the approach of [36] as the Lorentzian fixed-area paradigm.
The Lorentzian fixed-area paradigm
The treatment of [36] considered the special case of computing partition functions Z(β) = Tr(e −βH ) for time-independent gravitational systems. However, it did so by taking the Lorentz-signature path integral to be fundamental, and to be defined as an integral over spacetimes that were both real and Lorentz-signature up to the presence of certain codimension-2 singularities that one may call "Lorentzian conical singularities" following [50,51]; see also [52][53][54][55][56][57], as well as [48,49] and [58][59][60] for earlier arguments that treating the Lorentzian formulation as fundamental is essential to resolving the Euclidean conformal factor problem. As a result, much as in section 2.1, Z(β) was first written as an integral transform of distributional quantities that one may call Tr(e itH ). Due to their distributional nature, the quantities Tr(e itH ) are generally not well-defined for any fixed t, though integrating over t gives a well-defined result. The suggestion of [36] was to first integrate over the real Lorentz-signature metrics while holding fixed the areas of the codimension-2 conical singularities. In practice, this was done using the stationary phase approximation. It is an interesting point that the Jackiw-Teitelboim and Einstein-Hilbert actions define good variational principles with such fixed-area boundary conditions [61], and that the associated saddles may have arbitrary conical singularities at the fixed-area surface (as suggested in [62,63]); similar statements also hold in the presence of perturbative higher derivative corrections [61]. Evaluating the above-mentioned integral transform then led to a result that could be expressed as a final integral over Euclidean-signature metrics that satisfied the Euclidean equations of motion everywhere away from the fixed-area codimension-2 conical singularities, and which were thus known as Euclidean fixed-area saddles. Since the saddles were parameterized by the heretofore-fixed areas of the conical singularities, the final integral was simply an integral over the associated areas. For simple gravitational partition functions, this process was shown in [36] to yield the standard results. Let us therefore imagine that, in the semiclassical limit, a similar paradigm can be applied to any Euclidean path integral. In particular, given any operator B in the dual theory D, we imagine that Tr D B can be computed semiclassically as
Tr D B ≈ ∫ dA e −Ĩ A (s A ) , (4.1)
where A ≥ 0 parameterizes the possible codimension-2 areas of a set of conical singularities, Ĩ A is an action that gives a good variational principle when the area A is fixed, and the argument s A denotes the real Euclidean saddle of Ĩ A having the lowest action Ĩ A (s A ) for the given value of A that is consistent with satisfying the boundary condition at infinity. This paradigm can also be applied to JT gravity with matter (where a codimension-2 surface is a discrete set of points) by replacing the area A by the value of the dilaton ϕ summed over conical singularities. Here we assume ϕ 0 + ϕ ≥ 0 at each singularity.
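As a purely illustrative toy version of (4.1), the following sketch uses a hypothetical quadratic profile for Ĩ A (the symbols A_star and curv below are arbitrary assumptions made only for this illustration, not properties of any gravitational saddle). It displays the two features that matter below: the area integral converges because Ĩ A grows at large A, it is dominated by the minimizing area in the semiclassical limit, and the weights e −Ĩ A (s A ) / Tr D B behave as a normalized distribution over A.

import numpy as np

# Toy stand-in for the fixed-area integral (4.1):  Tr_D B ≈ ∫ dA exp(-Itilde_A(s_A)).
# The profile below is a hypothetical choice made purely for illustration.
A_star, curv = 12.0, 0.5                      # assumed location and curvature of the minimum
Itilde = lambda A: 0.5*curv*(A - A_star)**2   # toy fixed-area action profile

A = np.linspace(0.0, 60.0, 60_001)
weight = np.exp(-Itilde(A))
Z = np.trapz(weight, A)                       # toy analogue of Tr_D B

p = weight/Z
print("∫ p(A) dA =", np.trapz(p, A))          # ≈ 1: the weights define a normalized distribution
print("<A>       =", np.trapz(A*p, A))        # ≈ A_star: dominated by the minimizing area
print("Z vs saddle-point estimate:", Z, np.sqrt(2*np.pi/curv))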
In writing (4.1), it is assumed that the integral on the right-hand-side converges and that no further contour rotations are required. This is not at all obvious from a cursory study of the gravitational action. However, as argued in [63] (see also [62]), the quantities e −Ĩ A (s A ) / Tr D B are expected to represent the probabilities p(A) of finding an extremal surface with area A in a quantum gravity state with boundary conditions determined by the operator B. Since probabilities sum to unity, this would then require the right-hand-side of (4.1) to converge as desired. This idea has by now been investigated in a variety of contexts which appear to support this conclusion; see e.g. [30,31,36,64-68]. For clarity, we formalize this assumption as follows:
Assumption 5. We assume that, in the UV-completion of either JT gravity or Einstein-Hilbert gravity with minimally-coupled matter, the integral over fixed-area saddles on the right-hand-side of (4.1) converges and gives a good approximation to the left-hand-side in the semiclassical limit.
Assumption 5 is now almost sufficient to allow us to derive Assumptions 1-4 for both JT and Einstein-Hilbert gravity. However, recall that - just as in the non-gravitational setting of section 3.1 - the cut-and-paste operations of section 3.2 can produce surfaces on which certain equations of motion do not hold, and in particular at which first derivatives of fields fail to be continuous (though such derivatives admit well-defined limits when approaching the surface from either side). As a result, we will need to further strengthen Assumption 5 by adding yet another assumption. We motivate this addition using an idea similar to the motivation for Assumption 5 itself. In particular, let us note that there is a set of diffeomorphism-invariant observables defined by the conformal geometry of a minimal surface anchored to particular cuts of the asymptotically AdS boundary. Furthermore, the same is true for the minimal surface that is anchored to both the fixed-area surface and to particular cuts of the asymptotically AdS boundary, and it is again true when the surface is minimal only within any given homotopy class. Similarly, as will be discussed further in the next paragraph, one would expect states to be orthogonal when they have distinct such conformal geometries. One therefore expects that one can assign a probability to each possible conformal geometry in this context, and that the full path integral is given by integrating over such conformal geometries in analogy with (4.1). (As we will see below, it will be convenient to take the slicing prescription of Assumption 2 to be defined by minimal surfaces.) Here we have restricted discussion of the metric on the minimal surface to conformal equivalence classes of geometries since requiring the surface to be minimal is a form of gauge-fixing. After such gauge-fixing, the full induced metric will not form a set of commuting observables. Instead, one component of the induced metric becomes a function of the other coordinates and momenta by solving the Hamiltonian constraint. Since it is canonically conjugate to the trace of the extrinsic curvature (which has been fixed to zero), one solves this constraint for the conformal factor of the induced metric. The remaining conformal geometry on the minimal surface continues to define a set of commuting observables. The above comments then motivate the following assumption, which is a generalization of Assumption 1:
Assumption 6.
Let KM ,A be the class of spacetimes that i) satisfy asymptotic boundary conditions specified by M , ii) satisfy fixed-area boundary conditions at A, iii) have fields that are continuous everywhere, and iv) satisfy the conditions to be a fixed-area saddle everywhere except on a single codimension-1 minimal surface Σ anchored both to the fixed-area surface and to some cut of the boundary. As usual, we restrict to the case where the sources on M are real. In this case we assume that the fixed-area action on KM ,A is minimized by real saddles; i.e., every k ∈ KM ,A has action equal to or greater than that of some real saddle k s ∈ KM ,A of the fixed-area action Ĩ A .
Note that minimal surfaces in Euclidean signature are often well-defined even when the spacetime is not smooth. We assume that this is the case in the above context though, if needed, we could further elaborate on this definition by requiring the spacetime to be built from smooth spacetimes using cut-and-paste along minimal surfaces. Now, as discussed recently in both [69,70] in the context of JT gravity, there are various subtleties and possible choices involved in using minimal surfaces to construct observables (or, equivalently in the language of those references, to fully fix a gauge in the Euclidean path integral). Such subtleties may in the end require further refinements to Assumption 6. But it is also plausible that such issues are not important at the level of our current discussion. We have thus formulated Assumption 6 without taking such issues into account. Similarly, while minimal surfaces are smooth when the bulk spacetime dimension satisfies D ≤ 7 [71], they can be singular for D ≥ 8 [72]. This is another reason why a useful conjecture for D ≥ 8 could require further modification and/or additional work to describe a useful notion of the Einstein-Hilbert action on KM ,A . However, at least for 8 ≤ D ≤ 10 it turns out that such singularities are non-generic [73,74].
Aside: A new positive action conjecture
We now make a small digression in order to gather further supporting evidence for Assumption 6. One may note that a particular consequence of Assumption 6 is a new positive action conjecture. We use the term conjecture in order to indicate both that it is not to be added to the existing list of assumptions needed to derive our trace inequality, and also that it may be more amenable to study in the near future than are the assumptions listed above.
Conjecture 1. Let KM ,A be as in Assumption 6. Then the fixed-area action Ĩ A is bounded below on KM ,A .
As will be explained in section 5, Conjecture 1 is equivalent to what may naturally be called a positive-action conjecture for gravitational wavefunctions. While Conjecture 1 remains to be proven for general theories, we show in appendix A that the analogous result holds for JT gravity with dilaton-free couplings to matter. In JT gravity, the value of the dilaton at a point plays the same role as the area of a codimension-2 surface in higher dimensional gravitational theories; see e.g. [22,75]. Furthermore, in the same way that we might fix the area of a disconnected codimension-2 surface in higher dimensions, in JT we should allow the specification of a fixed-dilaton (fixed-ϕ) set, in the sense that we fix the sum ϕ total = Σ i ϕ i , where ϕ i are the dilaton values at each of the singular points. In discussing JT gravity we thus write KM ,ϕ total instead of KM ,A .
The important point in the JT argument is that, since the bulk spacetime dimension is D = 2, the minimality of a codimension-1 surface Σ implies that its extrinsic curvature tensor vanishes. As a result, for any k ∈ KM ,A the spacetime metric is in fact C 1 (except at the fixed-dilaton conical singularity) and the Ricci scalar cannot contain codimension-1 delta-functions localized on Σ. Since R = −2 on each side of Σ, we then find R = −2 on Σ as well (again, except at the fixed-dilaton conical singularity). Furthermore, as reviewed in appendix A.3, if we ignore the conical singularities then the JT action on KM ,ϕ total would be given by the Schwarzian action. It would thus be bounded below by the analysis of appendix A.4. But it is also easy to include the contribution from the conical singularities. This is just I sing = −2π i ϕ i δ i , where δ i is the conical deficit at each singularity. Conical excesses are also allowed, but those are just deficits with δ i < 0. Since each deficit must satisfy δ i ≤ 2π we have the bound Combining this with the bound on the Schwarzian action and the positivity of the matter action then establishes Conjecture 1 in this context. Assumptions 5 & 6 are sufficient Having motivated Assumptions 5 and 6, we now turn to the issue of showing that -together with known results for JT and Einstein-Hilbert gravity -they imply Assumptions 1-4. As a result, they also imply the desired bulk dual (1.6) of the trace inequality. Let us first note that Assumptions 5 and 6 immediately imply Assumption 1, as they were designed to do. Furthermore, assumptions 3 and 4 were already shown to be true for JT and Einstein-Hilbert gravity by the discussion of section 3.2 (between (3.11) and (3.12)). This then leaves only Assumption 2. There are two parts to showing that this assumption holds. The first is to specify a cutting rule for Assumption 2 so that pasting the various pieces together yields a spacetime that satisfies the desired asymptotic boundary conditions and, in particular, for which the new spacetime has a sufficiently smooth asymptotic boundary for which the action is finite and for which the action also defines a good variational principle. As noted above, for either JT or Einstein-Hilbert gravity, it will be convenient to take the cuts to be minimal surfaces. For JT gravity, appendix A.1 then shows that the associated cut-andpaste construction preserves the boundary conditions of the variational principle (which were also shown to imply finiteness in appendix A.3). For Einstein-Hilbert gravity, this property is even easier to verify and is established in appendix B. The second task is then to establish the additivity property (3.9). This is again straightforward for both JT and Einstein-Hilbert gravity. The main point is that, since the boundaries we sew together are now taken to have K = 0, there can be no delta-function contribution to the Ricci scalar at the seam where the sewing occurs 7 . Furthermore, the matter action is additive for the reasons explained in section 3.1. It thus remains only to show additivity for the boundary terms at asymptotic boundaries. One such term is always the Gibbons-Hawking term, while the rest are boundary counter-terms. Due to the requirement that each operator have an appropriate "rim", the boundary metric and other analogous boundary conditions are manifestly smooth. Thus the boundary counter terms are integrals of smooth functions and their additivity is also manifest. Figure 10. 
Our cut-and-paste construction can join pieces of the asymptotic boundary together in a way that is not smooth. We illustrate this here for a two-dimensional example (e.g., as appropriate to JT gravity). In particular, when two pieces are sewn together, two corners (with, say, associated interior intersection angles π/2 + α 1 , π/2 + α 2 ) of the individual pieces can merge. When this occurs, the extrinsic curvature density √ hK of the resulting ∂M will contain a delta-function of strength α 1 + α 2 .
The final term to consider is then the Gibbons-Hawking term. For a regulated version of the spacetime where the boundary has been moved inward to a finite value ϵ of the appropriate Fefferman-Graham coordinate (or of the defining function of the conformal frame in the language used for JT gravity in appendix A), the extrinsic curvature of the regulated boundary generally has a delta-function at the seam; see figure 10. But since the strength of this delta-function is always determined by the angles at which the asymptotic boundary meets the seam, we can render this part of the action additive by simply adding an appropriate 'corner term' to the definition of the action for the cut space. This procedure is discussed in great detail for JT gravity in appendix A.2. The Einstein-Hilbert case is then discussed in appendix B. Appendix B in fact shows that, with the usual boundary conditions and with the choices we have made, in the AlAdS d Einstein-Hilbert case (with d ≥ 3) the above delta-function turns out to make no contribution to the action in the limit ϵ → 0. Thus the corner terms also vanish in the ϵ → 0 limit and are not strictly needed in the Einstein-Hilbert case. This concludes the argument for both JT and Einstein-Hilbert gravity that Assumptions 5 and 6 imply Assumptions 1-4, and thus also the desired result (1.6).
5 On a positive-action conjecture for quantum gravity wavefunctions
As a slight aside from the main discussion, this section elaborates further on the status of Conjecture 1 in Einstein-Hilbert and other theories of gravity. We will first describe how it is equivalent to what is naturally called a positive-action conjecture for quantum gravity wavefunctions. We then point out that, at least when the surface Σ is a slice of a foliation of the bulk spacetime that is smooth away from Σ and which also smoothly foliates a compact AlAdS boundary, the conjecture is implied by the requirement that the gravitational Hamiltonian H is bounded-below. While the known asymptotically AdS positive energy theorems [76][77][78] are not sufficient to prove positivity of H at this level, the connection nevertheless provides additional physical reasons to believe that Conjecture 1 will hold. Recall that Conjecture 1 referred to bulk spacetimes M which have only asymptotic boundaries, but which may contain a surface Σ on which derivatives of fields are not continuous. Furthermore, away from Σ the Euclidean equations of motion are satisfied. As a result, if we cut the spacetime along Σ then each piece gives a smooth extremum of the standard Euclidean action so long as an appropriate boundary term is included at the cutting surface Σ and corresponding boundary conditions are imposed at Σ. In particular, even though M may have a conical singularity, the resulting pieces do not (though the boundaries of these pieces at Σ may contain 'corners' at which the extrinsic curvature of Σ contains delta-functions; see figure 11).
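Such corner delta-functions are nothing more than plane geometry. In the sketch below (flat two-dimensional space, with arbitrary illustrative values for the excess angles α 1 , α 2 ), two boundary segments meet a vertical seam at interior angles π/2 + α 1 and π/2 + α 2 ; after the pieces are sewn along the seam, the joined boundary has a kink, and the integrated extrinsic curvature across the kink (the strength of the delta-function in √h K) is simply the turning angle α 1 + α 2 . It is precisely a contribution of this strength that the boundary term at Σ must accommodate.

import numpy as np

# Two boundary segments approach the seam (the y-axis) from opposite sides, meeting it at
# interior angles pi/2 + alpha1 and pi/2 + alpha2. After sewing, the boundary turns by
# alpha1 + alpha2 at the junction; this turning angle is the integrated extrinsic curvature
# across the kink, i.e. the strength of the delta-function in sqrt(h) K discussed above.
alpha1, alpha2 = 0.20, 0.35                            # arbitrary illustrative values

t_in  = np.array([np.cos(-alpha1), np.sin(-alpha1)])   # unit tangent of the incoming boundary
t_out = np.array([np.cos(alpha2),  np.sin(alpha2)])    # unit tangent of the outgoing boundary
seam  = np.array([0.0, 1.0])                           # direction of the seam

print("angles between the boundary tangents and the seam:",
      np.arccos(np.clip(t_in @ seam, -1, 1)), np.arccos(np.clip(t_out @ -seam, -1, 1)))
print("turning angle of the sewn boundary:", np.arccos(np.clip(t_in @ t_out, -1, 1)))
print("alpha1 + alpha2                   :", alpha1 + alpha2)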
For Einstein-Hilbert gravity, it is convenient to take this boundary term to be a Gibbons-Hawking term at Σ, and thus to think of the resulting action as defining a variational principle for the space of configurations defined by fixing the induced metric on Σ to its value in M. Note that such pieces are precisely the spacetimes that appear as saddles in Euclidean path integral computations of gravitational wavefunctions in the induced-metric representation, where for a (d + 1)-dimensional bulk, one thinks of the state as a functional Ψ(h (d) ) of the d-dimensional (Riemannian-signature) metric induced on a Cauchy surface. Furthermore, the induced metric h (d) on Σ that minimizes this action will correspond to the peak of that wavefunction, which one expects to be finite in the semiclassical limit. As a result, one could also motivate Assumption 1 from the belief that semiclassical Euclidean path integral calculations of such wavefunctions should give sensible answers. In fact, we can also drop the requirement that Σ be minimal. The reason for this is explained in detail for JT gravity in appendix A.3, though it holds equally well for general theories of gravity. As described there, if one thinks of each piece M 1 , M 2 as being part of a larger saddle that extends beyond Σ, then the Hamiltonian constraint requires the on-shell action to be invariant under continuous deformations of Σ that do not move the anchor set ∂Σ on the asymptotic boundary. More specifically, the previous statement is true so long as the action includes sufficient 'corner terms' so that it defines a good variational principle under boundary conditions for Σ consistent with the desired deformation. It thus follows that Conjecture (1) is in fact equivalent to the following conjecture which, due to the above-mentioned connection with Ψ(h (d) ), we call the positive-action conjecture for quantum gravity wavefunctions: Conjecture 2. Consider the space of smooth Euclidean spacetimes having both an Asymptotically locally AdS (AlAdS) boundary (associated with some cosmological constant Λ < 0) and an additional finite-distance boundary at some surface Σ. In the usual way, we use a Fefferman-Graham expansion to fix a 'boundary metric' at the AlAdS boundary. We also impose some class of boundary conditions at the 'corners' where Σ meets the AlAdS boundary. We require that the boundary conditions allow Σ to be deformed to a minimal surface. Note, however, that we impose no boundary conditions on Σ. Let us now further restrict to such spacetimes that solve the vacuum Euclidean Einstein equations with cosmological constant Λ. On such solutions we consider the Euclidean Einstein-Hilbert action with cosmological constant Λ, together with the standard Gibbons-Hawking term on all smooth parts of the boundary, the standard boundary counter-terms on the AlAdS boundary, and 'corner terms' appropriate to the above-chosen boundary conditions at the corners. For fixed such boundary conditions we require the above action functional is bounded below. Here the qualification that the corner boundary conditions must allow Σ to be deformed to a minimal surface is needed in order to preserve equivalence with Conjecture 1, but is not obviously critical for the existence of a lower bound. We also emphasize that we have not fixed a particular induced metric on Σ, so that our lower bound is required to be independent of that induced metric. 
Furthermore, if Conjecture 2 holds then one can clearly also couple the system to positive-action matter with similar results. This conjecture generalizes Hawking's original positive action conjecture [21] from asymptotically flat to AlAdS spacetimes, and also by introducing the finite boundary Σ (appropriate to thinking of the spacetime as a Euclidean saddle for Ψ(h (d) ) instead of a partition function). Such generalizations are natural in the spirit of the original conjecture. The above conjecture is also weaker than that of [21] in the sense that we require only that each set of boundary conditions lead to a lower bound, but we allow this lower bound to depend on the choice of boundary conditions and, in particular, we allow the possibility that for some boundary conditions the greatest lower bound is less than zero. The positive action conjecture for asymptotically Euclidean spacetimes was proven by realizing that the Euclidean action in that context is equal to the Hamiltonian for a higher-dimensional Lorentzian-signature theory of gravity evaluated on a Riemannian-signature Cauchy surface [79]. This trick fails in the asymptotically AdS context, so a new proof strategy is needed. While it is unclear to us how to give a complete proof in general, there is a simple context in which the conjecture follows from having a lower bound for the gravitational Hamiltonian. To see the connection, consider a bulk spacetime M subject to boundary conditions as stated in the conjecture, and suppose that M admits a smooth foliation such that Σ = Σ 1 ∪ Σ 2 with Σ 1 diffeomorphic to Σ 2 and with both Σ 1 , Σ 2 being limiting cases of the slices in the foliation. We will refer to Σ 1 as the time t 1 and to Σ 2 as the time t 2 with slices in the foliation labeled by t ∈ (t 1 , t 2 ). Consider then the action I [t 1 ,t] for the region defined by slices with t ∈ [t 1 , t]. Clearly the zero-volume region [t 1 , t 1 ] has I [t 1 ,t 1 ] = 0. Furthermore, the usual Hamilton-Jacobi argument gives ∂ t 2 I [t 1 ,t 2 ] = H(t 2 ), where H(t 2 ) is the standard (time-dependent) gravitational Hamiltonian defined by the boundary conditions on the asymptotic boundary and evaluated on the initial data defined by the surface Σ t 2 . As a result, if the boundary Hamiltonian H(t) has a t-independent lower bound E 0 , then the action will satisfy
I [t 1 ,t 2 ] ≥ E 0 (t 2 − t 1 ).
For any given t it is natural to believe the corresponding H(t) to be bounded below so, since we consider a case in which the range of t is compact, it is also natural to expect this lower bound to be uniform 8 . However, there are many situations which are not of the above form. Consider, for example, Euclidean AdS 3 in the conformal frame where the boundary metric is that of S 2 . Slicing the S 2 along surfaces of constant polar angle θ, the boundary anchor sets ∂Σ t are then circles of time-dependent size that pinch off at the poles. Furthermore, due to the Casimir energy of AdS 3 [81,82], the lower bound on the Hamiltonian diverges as the size of the circle shrinks to zero. While spherical AdS 3 gives an example where H(t) has no uniform bound, it is nevertheless a context where the total action is finite and, moreover, where one very much expects the given spacetime to minimize the action. We are therefore hopeful that further study of this example may suggest how the above sketch of a proof might be improved to deal with more general contexts. We may also hope to learn to deal with the loci where the bulk topology forces the above foliations to break down.
However, we leave such investigations for future work.
6 Discussion
The above work discussed the trace inequality (6.1). Our goal was to understand the status of this inequality on the bulk side of the AdS/CFT duality. In particular, we studied the conjectured bulk inequality (6.3), where ζ(M ) denotes the gravitational path integral with boundary conditions given by the closed boundary source-manifold M and ⊔ denotes disjoint union. Here M bc † cb † is a smooth closed manifold specifying boundary conditions for our bulk theory on a Euclidean Asymptotically locally Anti-de Sitter boundary that can be broken into four pieces M b , M c † , M c , and M b † . Furthermore, we require that connecting M b and M b † gives a new smooth closed manifold M bb † that is invariant under a reflection-symmetry that exchanges the b and b † pieces (and which complex-conjugates any complex boundary conditions) and similarly for M cc † . At the level of the semiclassical expansion, and when the operators B, C define Euclidean bulk path integrals with connected boundaries, we argued that the natural bulk dual of (6.3) was in fact satisfied to all orders in the semiclassical expansion in two important contexts. The first is the case of JT gravity with a dilaton-free coupling to two-derivative matter, with the possible further addition of perturbative higher derivative terms. The second is given by Einstein-Hilbert gravity minimally coupled to two-derivative matter, where again higher derivative terms can also be included perturbatively. In all cases we assumed the bulk path integral defined by M bc † cb † (dual to ⟨Tr D (BC)⟩) to be dominated by a single bulk saddle. When several bulk saddles are equally dominant, formally non-perturbative effects associated with additional saddles and/or mixing between saddles can be more important than perturbative corrections and are subtle to analyze; see e.g. recent discussions in [28,29] for condensed matter analogues and in [5,30-32]. We thus save further consideration of this case for future study. For pure JT gravity, much can be said using explicit calculations based on standard Euclidean saddles. In addition, the non-perturbative definition of the Euclidean path integral described by Saad, Shenker, and Stanford [20] can be used to give a general derivation of (6.3). For more general cases the Euclidean path integral is sufficiently poorly understood that we cannot use the term "proofs" to refer to our arguments. Instead, we proceeded by stating various assumptions that we argued were plausibly true in regimes where a Euclidean gravitational path integral emerges from a more UV-complete theory. In particular, we considered three rather distinct paradigms for such path integrals. One of these was the possible extension of the Saad-Shenker-Stanford approach mentioned above [20] to (some UV-completion of) JT gravity coupled to positive action matter. Another paradigm involved a nonlinear generalization of the Gibbons-Hawking-Perry contour rotation prescription [21]. The third was the paradigm described in [36] that took a real-time formulation as fundamental but then described a procedure for transforming semiclassical computations into what were often sums over Euclidean saddles. We argued that all three paradigms lead to a bulk version of (6.1) in the semiclassical context described above, though our arguments were based on a set of assumptions about the semiclassical limit of a supposed UV-complete theory of quantum gravity.
A significant restriction in our arguments was that we considered only boundary conditions which are real in Euclidean signature, and which thus cannot include Lorentzian components. There is a sense in which extending our analysis of general gravitational theories to complex boundary conditions would be trivial, since we need only suitably extend the various assumptions we made along the way. This, however, would miss an important point that arises for both the non-gravitational path integrals studied in section 3.1 and the JT case analyzed semiclassically in section 2.1. In particular, once the sources are complex, the relevant saddles will generally not lie on the original contour of integration. In that context, the most direct analogue of the argument given here would attempt to show that cutting and pasting a valid complex saddle (through which the integration can be deformed to pass) for the left-hand-side of (6.3) yields a configuration k that lies on the steepest descent curve Γ sd through the dominant saddle for the right-hand-side (so that the dominant saddle then has lower action). Since it is not at all clear to us why that k should lie on the relevant Γ sd , we have not attempted to formulate a gravitational argument in this language. Instead, we leave further consideration of complex sources for future work. While it may be difficult to verify our assumptions about the Euclidean path integral, such assumptions imply other properties of the classical Euclidean action that are more amenable to study in the near future, and which in particular might be investigated numerically. The most tangible prediction is the positive action conjecture for gravitational wavefunctions described in section 5. This conjecture generalizes Hawking's original positive action conjecture [21] to the AdS context, and also generalizes it further by allowing spacetimes with an extra finite-distance boundary. However, we require only that the action be bounded below for each set of AlAdS boundary conditions, and not that the action be strictly non-negative. As some evidence in support of this conjecture, we were able to prove that the corresponding result holds in the simpler case of JT gravity (with general dilaton-free couplings to positive-action matter). Let us now return to the discussion of the bulk analogue (6.3) of the trace inequality (6.1) in a dual field theory. The main physical lesson from our investigations appears to be that this inequality is closely associated with positivity of entropy in the sense of having a positive density of states. To be more precise, we saw in various ways that the right-hand-side of (6.1) tends to be much larger than the left when the spectral densities of B and C are large. This is manifest from the CFT-side argument surrounding equation (1.4), as well as from the thermodynamic discussion in section 2.1. However, a corresponding feature also appeared in our gravitational arguments, where in the Einstein-Hilbert case we found the left-hand-side to be suppressed relative to the right by a factor of e −A/4G associated with the area of an extremal surface. In the JT context with the action (2.19) we found a similar suppression by e −4πϕ 0 . These expressions are readily recognized as being associated with the RT/HRT entropy [83][84][85] of a boundary region. We thus see that, even if the trace inequality were violated in contexts where such entropies are small, the violations would disappear in the limit where these entropies are large.
In other words, we see that the large entropy regime is a relatively poor probe of whether a gravitational theory can admit a non-gravitational dual. While the trace inequality (6.1) is a very weak constraint in the context of familiar quantum mechanical operators, we noted in the introduction that it can have fundamental implications. For example, it is deeply associated with the fact that the algebra of bounded operators on a Hilbert space is a type I von Neumann algebra. We will return both to this connection and to the fundamental status of (6.1) in bulk gravitational theories in a forthcoming work [86].
A Properties of the JT action
This appendix discusses a number of details regarding asymptotically AdS 2 Jackiw-Teitelboim gravity. After defining the theory by stating the action and boundary conditions in section A.1, additivity of the JT action in the sense of (3.9) is shown in section A.2. The relation to the Schwarzian action is then reviewed in section A.3, and the Schwarzian form is shown to be bounded below in section A.4.
A.1 Action and Boundary conditions for JT gravity
Much of the later analysis in this appendix will involve study of the boundary conditions for asymptotically-AdS 2 JT gravity. The purpose of this section is to describe such boundary conditions in detail. We consider here the pure JT gravity theory consisting of only a dilaton ϕ and a metric g on a 2d spacetime M, without additional matter fields. While our boundary conditions are just those of e.g. [20] (which are the Euclidean versions of those of [22]), we take this opportunity to rewrite them in a form more similar to that commonly used to describe asymptotically locally Anti-de Sitter spacetimes in higher dimensional theories of gravity. For use with our cut-and-paste constructions, we also allow a slight extension of the usual boundary conditions in which certain fields can be non-smooth on a codimension-1 surface. Below, we set the AdS 2 scale ℓ to 1. The first step in defining configurations of our theory is to consider 2-dimensional compact manifolds M with boundaries. These are perhaps most simply defined as the spaces obtained from S 2 by i) removing n open disks, which creates n circular boundaries that we label i = 1, . . . , n, ii) choosing g < n/2 and then for i ≤ g identifying the ith circular boundary with the (i + g)th circular boundary, iii) perhaps filling in one of the remaining circular boundaries with a cross-cap in order to obtain the non-orientable cases. Examples are shown in the left panel of figure 12. However, in order to accommodate the cut-and-paste constructions described in the main text, we also consider certain 2d manifolds with corners at their boundaries. In practice, it will be sufficient to define these by starting with one of the above 2d manifolds M with boundary (and without corners), choosing any smooth 1d surface Σ in that manifold that divides M into two parts, and removing one of the parts. We call what is left a manifold M with boundaries and corners; see the right panel of figure 12. The boundary of the final M now consists of two types of segments. The first type, whose union we call the asymptotic boundary ∂ as M, consists of those segments in ∂M which also lie on the boundary of the parent space M from which M was cut: ∂ as M = ∂M ∩ ∂ M. The second type, whose union we call the finite boundary ∂ f M, consists of the remainder, which we see must in fact form the slicing surface Σ (so that ∂ f M = Σ).
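For orientation, the Euler characteristics of the 2d manifolds constructed above are easy to track. The small helper below is written only for this discussion and uses standard facts about surfaces: removing an open disk from S 2 lowers χ by one, identifying a pair of boundary circles leaves χ unchanged (since χ(S 1 ) = 0), and capping a circle with a cross-cap (a Möbius band, with χ = 0) also leaves χ unchanged.

# chi(S^2) = 2; each removed open disk lowers chi by 1; identifying pairs of boundary
# circles and adding a cross-cap do not change chi. (Standard surface topology, included
# here only as bookkeeping for the construction described above.)
def euler_characteristic(n_disks: int, n_pairs_identified: int = 0, cross_cap: bool = False) -> int:
    used = 2*n_pairs_identified + (1 if cross_cap else 0)
    assert used <= n_disks, "cannot identify or cap more circles than were created"
    return 2 - n_disks

print(euler_characteristic(1))              # disk: chi = 1
print(euler_characteristic(2))              # cylinder: chi = 0
print(euler_characteristic(3, 1))           # one-holed torus: chi = -1
print(euler_characteristic(2, 0, True))     # Möbius band: chi = 0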
In general, ∂ as M and ∂ f M will intersect at a finite number of points. We wish to consider metrics g on M which are the restriction of metricsĝ on the parent space M and which in some region near ∂ as M can be written in the form where both the function h and the coordinates z, θ are smooth on M, where z has a first-order zero at ∂ as M, and where both ∂ z h and ∂ 2 z h vanish at z = 0. Similarly, we consider dilaton fields of the form with f smooth on M. Note that, for a given metric and dilaton, the form of (A.1) and (A.2) are preserved by appropriate smooth coordinate transformations which satisfy and under which we find that the associated h on the boundary transforms as h| z=0 → (h[ 1 a db dθ ] 2 )| z=0 . As a result, the coordinates (z, θ) that give the form (A.1) and (A.2) are far from unique. We also wish to impose further boundary conditions. To do so, we endow each of our manifolds with a special preferred scalar function Ω which will be used to translate between the physical metric and dilaton (which diverge at ∂ as M) and an unphysical rescaled metric and dilaton that will be used to specify the additional boundary conditions. It is convenient to define Ω to be a function on the parent space M such that Ω vanishes on ∂ M = ∂ as M but has dΩ nowhere vanishing on ∂ M = ∂ as M. We require Ω to be smooth on most of the spacetime, though -for use with our cut-and-paste constructions -we also allow the existence of a finite number of smooth codimension-1 surfaces Σ on which Ω is continuous and limits of first derivatives from either side are well-defined, but where dΩ can have discontinuities across the surface. The various such surfaces Σ are allowed to intersect at a finite number of points. We will refer to Ω as the defining function of the conformal frame, or simply as the conformal factor. In terms of any given set of coordinates (z, θ) satisfying the conditions above, we may write Ω = zω(z, θ) (A.5) for some smooth positive function ω that does not vanish anywhere on ∂ M = ∂ as M. We then use Ω to define rescaled (unphysical) fields We may thus introduce a coordinate u on each connected component of ∂ as M such that u measures the unphysical proper distance defined by ds 2 , and we may then requireφ on ∂ as M to be some fixed function ϕ b (u); i.e., we imposẽ In doing so, we must also fix the range of u and thus the total (unphysical) length of the boundary. Indeed, the second statement in (A.7) is meaningful only if we also label each boundary point once-and-for-all with a value of u as part of our boundary conditions. We note for future reference that (A.7) implies This completes our discussion of (asymptotic) boundary conditions for JT gravity. Note that, since we allowed discontinuities in dΩ, these boundary conditions are manifestly invariant under the cut-and-paste construction associated with Assumption 2 of section 3.2. A.2 Additivity and the JT action Having stated the dilaton and metric boundary conditions above, we can now proceed to discuss the JT action. Here we focus on the additivity property (3.9). We now include possible (dilaton-free) couplings to matter, meaning that the dilaton should not appear in the matter action. However, we will not spell out the details of the matter action or boundary conditions. 
We will simply (and implicitly) assume that the matter action for a given metric is of the form required for non-gravitational systems in section 3.1 (so that the matter action is separately additive), and that the matter fields fall-off sufficiently quickly at infinity that they do not affect the leading behavior of the dilaton given in (A.7) even when the equations of motion are satisfied. Since the metric and dilaton diverge at the asymptotic boundaries z = 0, the JT action will be defined as the ϵ → 0 limit of actions for regulated manifolds M ϵ , each given by the region of M with Ω ≥ ϵ for some ϵ > 0. The boundary ∂M ϵ of the regulated spacetime can again be decomposed into two parts, ∂ f M ϵ and ∂ as M ϵ , the first of which is just the part of ∂ f M that remains in M ϵ , and the second is the closure of the remainder: Again, the two parts generally intersect in a finite number of points, though the intersections are no longer strictly orthogonal at finite ϵ. For a given intersection point i, we thus let π/2 + α i > 0 denote the (interior) angle at which the finite and asymptotic boundaries meet; see figure 13. We now take the action to be I := lim ϵ→0 I ϵ with Figure 13. A regulated manifold M ϵ is shown with its finite boundary ∂ f M ϵ and asymptotic boundary ∂ as M ϵ intersecting at angles π/2 + α 1 and π/2 + α 2 . Here ϕ 0 is a constant, h is the induced metric on a boundary, K is the extrinsic curvature (a scalar, since the boundary is one-dimensional) defined by the outward-pointing normal, i ranges over all points where ∂ f M ϵ meets ∂ as M ϵ , and ϕ i are the values of ϕ at such meeting points. Note that, in the first line, we have included separate terms at ∂ as M ϵ and ∂ f M ϵ , neither of which will include effects from corners where they intersect. The natural deltafunction contributions from K at corners have instead been written explicitly in terms of the α i (up a π/2 offset for each corner that we now discuss). The usual calculation then shows (A.11) to be a good variational principle when the induced metric and dilaton are fixed on the finite boundaries ∂ f M and the boundary conditions of (A.1) are imposed at the asymptotic boundary, and of course when boundary conditions appropriate to I matter are imposed on matter fields. In particular, while the fact that we allowed dΩ to be discontinuous across a surface introduces delta-functions in the extrinsic curvature of M ϵ at some values of θ, these delta-functions give finite results when integrated over θ. The discontinuities in dΩ then have no further impact on the computation. In particular, they do not change any powers of ϵ. Since we require the matter action to be dilaton-free, the equation of motion obtained by varying ϕ in (A.11) is just R + 2 = 0 for any allowed matter. In particular, the matter field can thus have no effect on the asymptotics of the metric. This also means that the only positivity property of the matter that we will need is that I matter be bounded below (say, by zero) for any asymptotically AdS 2 constant curvature R = −2 Euclidean metric g. We now make a number of comments about the action (A.11), in particular regarding its additivity properties. It is convenient to begin by discussing the first line in (A.11), which turns out to purely topological. Let us denote these terms by I top . 
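Since the Gauss-Bonnet theorem with corners, ∫ K dA + ∫ k g ds + Σ i (exterior angles) = 2πχ, is the tool used repeatedly in the remainder of this appendix, the following sketch verifies it numerically in a simple example chosen purely for convenience (the triangle vertices below are arbitrary): a geodesic triangle on the unit sphere, where K = 1, the edges have k g = 0, and χ = 1, so the area plus the corner terms must equal 2π.

import numpy as np

# Geodesic triangle on the unit sphere: K = 1, k_g = 0 on the edges, chi = 1, so
# Gauss-Bonnet requires  Area + sum of exterior angles = 2*pi.
rng = np.random.default_rng(0)
V = [np.array([1.0, 0.2, 0.1]), np.array([-0.3, 1.0, 0.2]), np.array([0.1, -0.2, 1.0])]
A, B, C = [v/np.linalg.norm(v) for v in V]

def interior_angle(P, Q, R):
    # angle at vertex P between the great-circle arcs P->Q and P->R
    tq, tr = Q - (Q @ P)*P, R - (R @ P)*P
    return np.arccos(np.clip(tq @ tr/(np.linalg.norm(tq)*np.linalg.norm(tr)), -1.0, 1.0))

exterior_sum = sum(np.pi - a for a in
                   (interior_angle(A, B, C), interior_angle(B, C, A), interior_angle(C, A, B)))

# Monte-Carlo estimate of the triangle's area (= ∫ K dA since K = 1).
pts = rng.normal(size=(400_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
def same_side(P, Q, ref, x):
    n = np.cross(P, Q)
    return np.sign(x @ n) == np.sign(ref @ n)
inside = same_side(A, B, C, pts) & same_side(B, C, A, pts) & same_side(C, A, B, pts)
area = 4*np.pi*inside.mean()

print("area + corner terms =", area + exterior_sum, "   2*pi =", 2*np.pi)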
Since the interior angles at each intersection point i are π/2 + α i , the Gauss-Bonnet theorem requires where χ is the Euler character of M ϵ and n int is the number of points where ∂ f M ϵ and ∂ as M ϵ intersect. The ϵ-dependence of I top is manifestly trivial, so we will drop ϵ labels when discussing it below. Furthermore, given disjoint configurations M 1 and M 2 , we see that I top satisfies Here we use M to denote both the underlying manifold with boundaries and corners and the fields carried by that manifold. We will continue this abuse of notation below. The symbol ⊔ denotes disjoint union. The term I top also satisfies a second more interesting identity. To describe this identity, consider a configuration M for which ∂ f M has n f connected components ∂ f,j M for j = 1, . . . n f . We wish to form a new configuration M by identifying pairs of components ∂ f,j M. In particular, for some m f < n f /2, suppose we have (surjective) diffeomorphisms η j : . . m f that preserve both the induced metric 9 and the conformal factor Ω. We define M by using each η j to identify its domain with its range, so that Similarly, identifying pairs of circles is well-known to leave the Euler character unchanged. However, identifying a pair of line segments lowers χ by 1; see again figure 14. 9 Since the conformal factor is also preserved, it does not matter whether we state this definition in terms of g org. We also note that we have excluded the possibility of identifying some component of ∂ f,j M with itself. When the component is an S 1 , nontrivial such identifications do exist that yield smooth results, and our analysis below could be generalized to include them, but we will have no need of them. Because it satisfies (A.13) and (A.19), we say that I top is sewing-additive. This in particular means that it satisfies (3.9). We will use the same term below for any other functional satisfying analogous identities. (Note that this as yet says nothing about the operations associated with the yet-to-be-discussed monotonicity relations (3.11) and (3.12).) In fact, the entire regulated Euclidean action I ϵ defined by (A.11) is sewing-additive. It is already manifest that I ϵ satisfies There are now two effects to consider in order to show I ϵ (M) = I ϵ (M). The first is that, as in the discussion of I top above, intersections between the finite and asymptotic boundaries can disappear in pairs, so that there are contributions α i ϕ i to I ϵ (M) that do not appear in I ϵ (M). However, such a disappearance is associated with the joining of two asymptotic boundary segments as shown in figure 10 from section 4.2. The result is generally not smooth, so that the extrinsic curvature density √ hK of ∂ as M contains a new delta-function of a strength defined by the angles α i associated with the disappearing pair of intersections. In fact, for disappearing interior intersection angles α 1 , α 2 , the delta-function is of strength α 1 + α 2 . This relation can be derived from the Gauss-Bonnet theorem. In particular, √ hK remains smooth when both of the disappearing intersections are orthogonal (α 1 = α 2 = 0). See again figure 10 in section 4.2. The second effect is that, when two boundaries are sewn together, the seam is smooth only when the extrinsic curvatures K match appropriately on the two surfaces. More generally, the sewing leads to a singularity on the seam which gives a delta-function in √ gR related to the discontinuity in extrinsic curvatures. 
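The relation between a jump in extrinsic curvature and a delta-function in √g R can also be seen very concretely in a toy example constructed only for illustration. For the two-dimensional metric ds 2 = dx 2 + a(x) 2 dθ 2 one has R = −2a ′′ /a, while the extrinsic curvature of a constant-x slice is K = a ′ /a; a kink in a(x) at a seam x = 0 therefore produces a delta-function in √g R whose integral is fixed by the jump in a ′ (i.e., in K), while a seam with matching K produces none.

import numpy as np

eps = 1e-3                                    # smoothing scale for the seam (arbitrary)
x = np.linspace(-0.01, 0.01, 40_001)

def strip_integral(a):                        # ∫ dx sqrt(g) R = ∫ dx (-2 a'') per unit theta
    a2 = np.gradient(np.gradient(a, x), x)
    return np.trapz(-2*a2, x)

a_kink    = 1 + eps*np.log(2*np.cosh(x/eps))  # smoothed a = 1 + |x|: a' (and K) jumps by 2
a_matched = 1 + 0.5*x**2                      # a' (and K) continuous at the seam

print("kinked seam  :", strip_integral(a_kink))     # ≈ -2 * (jump in a') = -4, independent of eps
print("matched seam :", strip_integral(a_matched))  # small smooth bulk piece only; it vanishes as
                                                    # the strip shrinks, i.e. no delta-function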
This phenomenon is well-known from the Israel junction conditions; see e.g. [87]. The particular relation again follows by applying the Gauss-Bonnet theorem to a disk of infinitesimal size (and perhaps strong curvature) bounded by the two surfaces to be sewn together and then taking the limit where the surfaces coincide. In this way one sees that the contribution to I ϵ (M) from the delta-function in R on M precisely compensates for the fact that In particular, we see that the regulated action I ϵ is already sewing-additive at finite ϵ. Taking the limit ϵ → 0 then shows that corresponding property again holds for the unregulated action I. A.3 Relation to the Schwarzian Action As shown in [22] (see [20] for a Euclidean-signature treatment), the on-shell action for pure JT gravity takes a so-called Schwarzian form, which has proved to be extremely useful. We very briefly review this below, though most of the present section is a slight aside that generalizes the above result to our class of manifolds-with-boundaries-and-corners M. The extension is not of critical use in the main text, but may be enlightening to some readers. We also comment briefly on the off-shell extension of this result. The only equation of motion used to derive the Schwarzian action below is R + 2 = 0. Since the first line of (A.11) is topological, deviations from the on-shell result are controlled by the term involving √ gϕ(R + 2). In the usual way (see e.g. [88] for a review) since we took the asymptotically locally AdS 2 boundary conditions to require both ∂ z h and ∂ 2 z h vanish at z = 0, we find on q i requires any additional boundary term on ∂ f M to be independent of the momenta p i . Furthermore, any such boundary terms can then be absorbed into the bulk by an appropriate redefinition of the momenta p i , leaving us with an action of the form (A.26) as claimed. The usual Hamilton-Jacobi argument then shows that variations of I with respect to infinitesimal changes in the location of the surface Σ (used to cut M from M) must involve two contributions. Here we restrict attention to variations that preserve the boundary conditions stated above for I, and which also preserve the points where Σ intersects ∂ as M. The first contribution is given by varying the location of Σ while holding each q i fixed on Σ, and the second is given by leaving Σ fixed within the manifold by varying that values of q i on Σ as dictated by the appropriate evolution under the equations of motion (in accord with the would-be motion of Σ through M). So long as the intersection of Σ with ∂ as M does not change, combining the two terms gives a result in which contributions of the p iq i term cancel completely. The remainder is simply linear in the constraints H ⊥ and H || . But the constraints vanish since M is on-shell, and we see that the desired variation vanishes as well. In other words, the action of M is in fact invariant under smooth deformations of the surface Σ used to slice it from an on-shell M, so long as the deformations both leave fixed the intersections of Σ with the asymptotic boundary ∂ as M and respect the boundary conditions associated with the action I. In particular, this requires that the α i remain of whatever order in ϵ is specified by the boundary conditions. This result then allows us to evaluate the action I(M) by choosing any convenient surface Σ related to ∂ f M via smooth boundary-condition preserving deformations within M. 
One choice we can always make is to minimize the physical length of Σ (after subtracting the appropriate universal divergence from the region near the boundary). In two Euclidean dimensions, doing so necessarily results in a smooth surface with vanishing extrinsic curvature [71]; i.e., in a geodesic. Having thus set K = 0 on Σ, inspection of (A.11) shows that all boundary terms on the interior of ∂ f M now vanish. The only remaining contributions to I from Σ are then those associated with the angles α i . These are straightforward to compute using the well-known fact that geodesics in spacetimes of the form given by (A.1) are asymptotically of the form θ = θ 0 + θ 2 z 2 + . . . , while the proper distance s along the geodesic is s = ln z + O(z 2 ). This fact follows from using (A.1) and expanding the geodesic equation in powers of z (say, by taking z as a non-affine parameter along the geodesic). Note that the above expansion shows that taking K = 0 on ∂ f M is consistent with (A.9). At Ω = ϵ the unit-normalized tangent to the inward-directed geodesic is of the form On the other hand, defining ω 0 (θ) = ω| z=0 and noting that Ω = zω 0 + O(z 2 ), we see that at Ω = ϵ the regulated version of the asymptotic boundary ∂ as M has (in the direction of increasing θ) the unit-normalized tangent Since we chose the tangent along the asymptotic boundary to be in the direction of increasing θ, combining these with (A.1) yields where the + (−) sign corresponds to an intersection point at the large-θ (small-θ) end of an asymptotic boundary segment; see figure 15. Applying (A.8) with h = 1 then yields and where now the (−) sign is correct for an intersection point at the large-θ end of an asymptotic boundary segment and the (+) sign holds at the small-θ end. As a result, we find where u i , θ 2,i are the u-value and θ 2 -value of the ith intersection point between the finite and asymptotic boundaries. The natural boundary term in (A.25) has been cancelled by the ϕ i α i contributions, but a new boundary term involving θ 2 remains. This is related to the fact that, as discussed in the main text, sewing together two boundaries may not result in an asymptotic boundary that is smooth at finite ϵ. In other words, the sewn-together manifold may not be associated with a smooth conformal factor Ω. We have seen, however, that the extension to conformal factors that allow √ hK to contain delta-functions causes no significant issues. A.4 Positivity of the Schwarzian action We now establish the positivity result needed in the main text. In particular, we consider manifolds M satisfying the above boundary conditions and which have only asymptotic boundaries (i.e., ∂ f M = ∅.). Positivity of α, ϕ and the above-noted fact that we can freely choose any finite boundaries to be geodesics then immediately also imply a similar lower bound for the case of non-empty ∂ f M. As noted previously, we require I matter to be minimally-coupled to the metric g. In particular, the dilaton ϕ should not appear in I matter . We also require that I matter be bounded below by zero when the metric g satisfies R + 2 = 0 and is locally asymptotically AdS 2 . Since the topological term is minimized by taking the spacetime to be a disconnected union of disks, for fixed boundary conditions the full action will be bounded below if we can derive a lower bound for the Schwarzian action (A.25) for each disk; i.e., on each circular boundary. 
This is certainly to be expected since the Schwarzian action arises [90] (see also [91]) as an effective description of the low energy limit of (a limit of) the Sachdev-Ye-Kitaev model (first introduced in [92]), which is a standard quantum mechanical system. However, it is reassuring to see a direct argument 10 . For general ϕ b (u), we would like to simplify (A.25) by defining a new coordinateũ such that We can think of thisû as being associated with a different choice of conformal frame defined byΩ :=φ b ϕ b Ω, in which we see that the new boundary dilaton profile would be given byφ b . Since we have required that Ω be specified as part of the definition of the system, it would not be correct to say that the coordinate transformation (A.33) actually changes the boundary values of ϕ, but it nevertheless allows us to rewrite any function of the original ϕ b boundary conditions in terms of another JT system with boundary conditions specified byφ b . The Schwarzian action correspondingly becomes (A.34) 10 It seems likely that this result is already somewhere in the vast literature concerning the Schwarzian action. The authors would be happy to receive references to earlier published versions of this result. The last term becomes manifestly non-negative after integrating by parts. Thus we only need to show that the first term is bounded below for any convenientφ b (u) (and with the period of u dictated by this choice via (A.33) andwith any convenient the period ofû). For simplicity, we chooseφ b (u) to be a constant (which we again callφ b ). We also choose the period ofû to be 2π, which then sets the value ofφ b for a given ϕ b (u). To clean up the notation, we will henceforth writeû as simply u so that the simplified action (dropping the final term in (A.34)) becomes Since we consider only circular boundaries, we can ignore the total derivative that gives the final term of (A. 35). It is also convenient to write η(u) = θ ′ (u), so thatĪ Schwarz becomes (A.36) The expression (A.36) is to be evaluated on functions η that satisfy an important constraint, since θ must increase by 2π when u increases by 2π. In other words, we require We can take this constraint into account by adding a Lagrange multiplier to the action: There is an obvious saddle of this action at η = 1, λ = 2. Perturbations around this saddle point may be studied by writing η = 1 + Υ. Expanding (A.38) to quadratic order yields ∆Ī Schwarz = − duφ b Υ 2 − (Υ ′ ) 2 = − duφ b Υ 2 + ΥΥ ′′ + bdy terms. (A.39) The eigenfunctions of the operator ∂ 2 u + 1 are Υ = e iku , k ∈ Z, (A. 40) with eigenvalues −k 2 + 1. The k = 0 mode thus has negative action, but it is forbidden by the constraint The modes k = ±1 have vanishing action, and they turn out to correspond to the SL(2,R) symmetries of the Schwarzian action. These are in fact gauge symmetries of JT gravity, though they appear as global symmetries ofĪ Schwarz due to the partial gauge-fixing used in derivingĪ Schwarz [22]. Other modes all have positive quadratic action ∆Ī Shcw . The above analysis shows η = 1 to be a local minimum of the action over the space of allowed configurations. Importantly (and as we will see explicitly below), the same must be true for all of the saddles that can be reached by following the flat directions associated with the SL(2,R) symmetries. These of course share the same minimum value ofĪ Schwarz . However, we wish to show that this value is in fact a global minimum. 
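The mode analysis just described is easy to check numerically. The following sketch (a crude periodic finite-difference discretization with an arbitrary grid size N, written only for this purpose) diagonalizes the quadratic form ∫ du [(Υ ′ ) 2 − Υ 2 ] on the circle after projecting out the constant mode forbidden by the constraint. One finds the exactly-zero projected constant direction, two near-zero modes corresponding to k = ±1 (negative only at the level of the O(du 2 ) discretization error), and strictly positive eigenvalues ≈ k 2 − 1 for all remaining modes, consistent with η = 1 being a local minimum up to the SL(2,R) directions.

import numpy as np

N = 400
du = 2*np.pi/N
# periodic second-difference operator D2; the quadratic form is Υ^T (-D2 - 1) Υ (up to du factors)
D2 = (np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0) - 2*np.eye(N))/du**2
M = -D2 - np.eye(N)

P = np.eye(N) - np.ones((N, N))/N            # projector enforcing ∫ Υ du = 0
evals = np.sort(np.linalg.eigvalsh(P @ M @ P))

print("four smallest eigenvalues:", evals[:4])   # ~ 0, 0, 0, 3: the projected constant mode,
                                                 # the two k = ±1 zero modes, then k = 2
print("most negative eigenvalue :", evals[0])    # ~ 0 up to O(du^2) discretization error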
One way to proceed is to note that (since we have fixed the winding number) the space of configurations is connected. As a result, if any configuration has lower action than η = 1, there is a path through configuration space that connects it to η = 1. Furthermore, starting at the η = 1 end of the path, the analysis above shows that the action must first increase before it can decrease. This is the case even if the path starts out within the space Z of saddles related to η = 1 by the SL(2,R) zero modes, since such a path must eventually leave Z, and we have shown that any path leaving Z must first increase the action before the action can decrease. As a result, along any given such path p there must be a point P 0 at which the action reaches a local maximum µ(p). Let us now attempt to minimize µ(p) over all paths p. So long as there are no directions in which µ(p) remains finite at the edge of the space of allowed configurations (i.e., when η diverges at some u), then the minimal µ(p) must in fact be a saddle s for the action Ī Schwarz which satisfies Ī Schwarz (s) > Ī Schwarz (η = 1); i.e., it must be a new saddle for the Schwarzian action. While we will not exclude the runaway possibility with complete rigor, if the divergence in η admits any kind of asymptotic expansion at large λ it would certainly require the first two terms to cancel against each other at leading order. But such cancellation requires
η = ±η ′ /η + . . . , (A.42)
where . . . represents lower order terms. Solving (A.42) yields ±η = 1/(u + ∆), where ∆ is a function whose derivative ∆ ′ vanishes as η → ∞. Without loss of generality we can take the point at which η diverges to be u = 0, in which case ∆ is approximately constant near u = 0. As a result, the integral of η must diverge and the constraint (A.37) cannot be satisfied. Thus, if the action were to be unbounded below, there must be a new saddle point. However, we will now seek such new saddles directly and show that they do not exist. To do so, note that the action can be equivalently written as the action of a particle in a potential by defining χ = ln η to write
Ī Schwarz = φ b ∫ du [ (χ ′ ) 2 − (e 2χ − λe χ + λ) ] . (A.43)
The solutions to the saddle-point equations of motion can of course be labelled by the total energy E, which we normalize as
(χ ′ ) 2 + e 2χ − λe χ + λ = E . (A.44)
Note further that the potential V (χ) = e 2χ − λe χ + λ always approaches λ as χ → −∞, but that it does so in different ways depending on the value of λ. For λ ≤ 0 the potential increases monotonically and all orbits are unbound. Here we use the term 'orbit' to refer to some χ(u) that solves the equation of motion obtained from (A.43) by varying χ, but which does not necessarily satisfy either the periodicity condition χ(u) = χ(u + 2π) or the constraint (A.37). In contrast, for λ > 0 the potential has a single critical point at χ = ln(λ/2), at which V = λ − λ 2 /4. This critical point is in fact a global minimum of the potential. Thus the case λ > 0 admits both unbound orbits (with E ≥ λ) and bound orbits (with E < λ). The potential remains finite in the asymptotic region of large negative χ, so the velocity in this region is finite. Since the unbound orbits all run to large negative χ, they can reach χ = −∞ only at infinite values of u. This means that there is no sense in which they can be periodic, and such orbits are not allowed. We can therefore focus on the bound orbits. Let us first consider the special cases that sit at the global minimum for all time.
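These elementary properties of the potential are simple enough to verify symbolically; the following lines are a sympy sketch written only for this discussion, and reproduce the critical point, its value, and the asymptotic value of V quoted above for the case λ > 0.

import sympy as sp

chi = sp.symbols('chi', real=True)
lam = sp.symbols('lam', positive=True)          # the case lam > 0 that admits bound orbits
V = sp.exp(2*chi) - lam*sp.exp(chi) + lam

crit = sp.solve(sp.diff(V, chi), chi)
print("critical point          :", crit)                                     # [log(lam/2)]
print("value at critical point :", sp.simplify(V.subs(chi, sp.log(lam/2))))  # lam - lam**2/4
print("limit chi -> -infinity  :", sp.limit(V, chi, -sp.oo))                 # lam

In particular, for λ > 0 there are constant configurations that simply sit at this minimum of the potential for all u.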
Such solutions are clearly periodic with any period. However, they satisfy the constraint (A.37) only for χ = 0, which then requires λ = 2, E = 1. For future reference we note that E − λ = −1. Finally, we can investigate the constraints for the other bound orbits. The first constraint is that χ is periodic with period 2π, which again requires E − λ = −1; see (A.45). Here the second step in (A.45) used equation (A.44), and the quantities η_± are defined by first defining χ_± to be the two roots of the denominator of the integrand and then setting η_± = e^{χ_±} = (λ ± √(λ² + 4E − 4λ))/2. The final answer on the right-hand side was obtained by noting that the integral over η can be expressed as a contour integral around a branch cut from η_− to η_+. Since the contour integral for large η vanishes, the final answer is given by the residue of the pole at η = 0. The second constraint turns out to be trivially satisfied for any E and λ, since we find (A.46); again, in the final step the integration is performed using complex contour-integration techniques. Thus we see that all saddles have E − λ = −1, but that there is a saddle for each λ ∈ R (or, equivalently, for each real E). The original η = 1 saddle lies in this family with λ = 2, which corresponds to the case where η_+ = η_−. All of these solutions turn out to have the same action, consistent with the earlier statement that there is a family of solutions related by an SL(2,R) symmetry. To see this, note that the Schwarzian action can be written as an integral over η in which we still need to set E − λ = −1. This integral can once again be tackled by the methods of complex analysis. The integrand has poles at η = 0, ∞, so the integral can be evaluated using their residues to find (A.48), which is a constant independent of λ. Note that this corresponds to the action in section 2 with β = 2π. Of course, the SL(2,R) symmetry of [22] relates each saddle on Z to the η = 1 solution and, in particular, shows that each such saddle is again a local minimum up to the SL(2,R) zero modes. This observation completes the argument that (A.48) is in fact the minimum value of Ī_Schwarz, and thus that the Schwarzian action is bounded below. Allowing a general β, multiple S¹ boundaries labelled by j, and restoring the topological term would yield the corresponding general bound.

B Cut-and-paste asymptotically locally AdS boundary conditions for the Einstein-Hilbert action with boundary counterterms

Standard treatments of asymptotically locally AdS (AlAdS) boundary conditions, and in particular standard discussions of boundary counterterms, typically assume that all structures should be smooth (see e.g. [93] for a review and references). However, as described in detail for JT gravity in appendix A, our cut-and-paste constructions generally lead to some lack of differentiability. The purpose of this section is thus to extend the standard boundary conditions to allow this behavior and to establish the associated properties of the action needed for section 4. The form of our cut-and-paste construction will make this straightforward. Since we work in Euclidean signature, the relevant conical singularities were already addressed thoroughly in [61]. Furthermore, it is readily apparent that they do not affect the asymptotic boundary conditions. In addition, as argued in section 4, the bulk terms in the action remain finite and well-defined under our cut-and-paste operations. We thus need only consider the effects of these operations on the asymptotic region of the spacetime.
Since section 4 chooses to slice the smooth spacetimes along minimal surfaces, our study of the asymptotics will be facilitated by understanding the asymptotics of codimension-1 minimal surfaces Σ in smooth AlAdS spacetimes. We are interested in surfaces that are anchored on the asymptotic boundary, in the sense that ∂Σ is a smooth codimension-1 submanifold of ∂M. These anchor sets are always boundaries of the form ∂M_a = ∂M_b at which two source manifolds-with-boundary M_a, M_b are sewn together to form some closed manifold M_ab. Furthermore, by the rim requirement of section 3.2, such boundaries always lie in cylinders C_ϵ of the form discussed in section 3.1. It is thus convenient to define a Euclidean "time" coordinate t_E on the AlAdS boundary M̃ such that t_E is constant on each cut on which our extremal surface is to be anchored, and for which the (unphysical) AlAdS boundary metric has g^(0)_{t_E t_E} = 1. We may take any connected component of the anchor set to be of the form t_E = t_0. Consider in particular the part of the extremal surface near this anchor set. When the bulk AlAdS spacetime has dimension d + 1, we may introduce d − 1 coordinates x^i on the t_E = t_0 slice and use these (along with t_E) to construct a Fefferman-Graham coordinate system near M̃. The codimension-1 minimal surface can of course be found by minimizing the volume functional, whose integrand involves the determinant g_Σ of the metric induced on Σ; near the boundary this is controlled by the determinant of the metric induced on ∂Σ by the (unphysical) AlAdS boundary metric g^(0), up to terms that are subleading as z → 0. As a result, our cut-and-paste construction using minimal surfaces clearly defines spacetimes in which the usual Fefferman-Graham expansion holds up to possible corrections at order z^{d+1} relative to the leading terms. Since all divergences in the gravitational action are associated with terms that are at most of order z^{d−1}, an action defined using the standard boundary counterterms will remain finite on such spacetimes. In particular, the traced extrinsic curvature K of a z = constant surface will contain a delta-function δ(t_E − t_0), but with a coefficient of order z^{d+1}. Since the volume element on the asymptotic boundary is only O(z^{−d}), this means that such a delta-function makes no contribution to the Gibbons-Hawking term at z = 0. Furthermore, since the boundary stress tensor T^{IJ}_bndy is associated with the term of order z^d in the Fefferman-Graham expansion of the bulk metric, it remains well-defined as well. Here we use I, J to denote {t_E, x^i}. Thus by the usual computation we may write the variation of the action in the standard form, as a boundary integral of T^{IJ}_bndy δg^(0)_{IJ} plus terms proportional to the usual bulk equations of motion. In particular, from this we see that the standard action continues to give a good variational principle for our cut-and-paste spacetimes. As a result, we are free to extend the domain of the usual action to include the above non-smooth spacetimes without further modification.
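The power counting used in this appendix can also be spelled out symbolically. A minimal SymPy sketch (the numerical check at d = 3 is an arbitrary choice), showing that an O(z^{d+1}) coefficient is killed by the O(z^{−d}) volume element while the O(z^d) term that feeds the boundary stress tensor survives:

```python
import sympy as sp

z, d = sp.symbols('z d', positive=True)

# Delta-function piece of K: coefficient O(z^(d+1)) against the O(z^(-d))
# volume element of a z = constant surface.
gh_term = z**(d + 1) * z**(-d)
print(sp.simplify(gh_term))                  # z
print(sp.limit(gh_term.subs(d, 3), z, 0))    # 0: no contribution to the Gibbons-Hawking term

# The O(z^d) term sourcing T_bndy against the same volume element stays finite.
print(sp.limit((z**d * z**(-d)).subs(d, 3), z, 0))   # 1: T_bndy remains well-defined
```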
34,209
2023-09-05T00:00:00.000
[ "Mathematics" ]
Age, Sex and Overall Health, Measured As Frailty, Modify Myofilament Proteins in Hearts From Naturally Aging Mice We investigated effects of age, sex and frailty on contractions, calcium transients and myofilament proteins to determine if maladaptive changes associated with aging were sex-specific and modified by frailty. Ventricular myocytes and myofilaments were isolated from middle-aged (~12 mos) and older (~24 mos) mice. Frailty was assessed with a non-invasive frailty index. Calcium transients declined and slowed with age in both sexes, but contractions were largely unaffected. Actomyosin Mg-ATPase activity increased with age in females but not males; this could maintain contractions with smaller calcium transients in females. Phosphorylation of myosin-binding protein C (MyBP-C), desmin, tropomyosin and myosin light chain-1 (MLC-1) increased with age in males, but only MyBP-C and troponin-T increased in females. Enhanced phosphorylation of MyBP-C and MLC-1 could preserve contractions in aging. Interestingly, the age-related decline in Hill coefficients (r = −0.816; p = 0.002) and increase in phosphorylation of desmin (r = 0.735; p = 0.010), tropomyosin (r = 0.779; p = 0.005) and MLC-1 (r = 0.817; p = 0.022) were graded by the level of frailty in males but not females. In these ways, cardiac remodeling at cellular and subcellular levels is graded by overall health in aging males. Such changes may contribute to heart diseases in frail older males, whereas females may be resistant to these effects of frailty. www.nature.com/scientificreports www.nature.com/scientificreports/ associated with cardiac aging are modified by frailty, but little is known about the impact of age and frailty on the heart, especially in females. Frailty has been quantified clinically with many different instruments 16,17 . One common technique is to create a "frailty index", by dividing the number of health deficits in an individual by the total number of deficits considered to produce a score between 0 and 1, where higher scores denote greater frailty 18 . We have adapted this approach to quantify the degree of frailty in aging rodents 13,[19][20][21] . This provides a powerful new tool that can be used to explore the relationship between cardiac aging and overall health (frailty), in mice of both sexes. The goal of this study is to investigate the impact of age, sex and frailty on cardiac contractile function and explore underlying mechanisms that regulate contraction in a mouse model. Studies used isolated ventricular myocytes, Langendorff-perfused hearts and ventricular homogenates from middle-aged (~12 months) and older (~24 months) male and female C57Bl/6 mice. Frailty was evaluated in each animal with a frailty index tool that measures frailty as the accumulation of health deficits across many diverse systems, but not the cardiovascular system per se 22 . Results Calcium transients declined with age, but contractions were largely unaffected in both field-stimulated ventricular myocytes and intact hearts from male and female mice. Initial experiments determined whether ventricular myocyte contractions and the underlying calcium transients were affected by age and whether this differed between the sexes. Figure 1A shows representative contractions recorded from ventricular myocytes (paced at 4 Hz) isolated from the hearts of middle-aged (~12 mos) and older (~24 mos) male and female mice. The mean (± SEM) data show that contraction amplitudes were similar regardless of age or sex (Fig. 1B). 
The speed of shortening showed a modest increase with age in males, which was significantly different from older females (Fig. 1C). The velocity of lengthening was not affected by age but was slower in older females compared to older males (Fig. 1D). These results indicate that there are few age-or sex-dependent changes in contractions in field-stimulated ventricular myocytes between middle-age and later life. Figure 1E shows examples of calcium transients recorded from ventricular myocytes from the hearts of middle-aged and older mice. Mean data show that calcium transient amplitudes declined with age in both sexes (Fig. 1F). There also was a sex difference where, regardless of age, calcium transients were smaller in cells from females than males. We evaluated the impact of age and sex on calcium transient rates of rise and decay (Fig. 1G,H). The calcium transient rates of rise declined with age and this was significant for females, where rates of rise were slower than for males at both ages (Fig. 1G). Age was also associated with a dramatic slowing of calcium transient decay rate in both sexes (Fig. 1H). These observations show that aging was associated with an overall decrease in the magnitude and speed of calcium transients in both sexes, with few parallel changes in contractions. To investigate whether cardiac contractile function changed between 12 and 24 months of age, we also compared left ventricular developed pressure (LVDP) in Langendorff-perfused hearts from both sexes. Figure 2A shows representative examples of LVDP recorded from isolated perfused hearts from middle-aged and older male mice. Figure 2B illustrates mean data that show that peak LVDP was similar, regardless of age or sex. Likewise, the rates of pressure development (Fig. 2C) and decay (Fig. 2D) were similar in hearts from middle-aged and older mice of both sexes. Thus, even though aging was associated with smaller, slower calcium transients, contractile function was unaffected in intact hearts and isolated myocytes. To explore potential underlying mechanisms, we next investigated whether changes in the myofilaments occurred during the aging process. Actomyosin Mg-ATPase activity markedly increased with age in myofilaments from female but not male hearts. We investigated the relationship between activating calcium concentrations and actomyosin Mg-ATPase activity in myofilaments isolated from male and female ventricles at both ages. Results are shown in Fig. 3. When absolute actomyosin Mg-ATPase activity was plotted as a function of calcium in males, there was no difference between middle-aged and older hearts (Fig. 3A). By contrast, actomyosin Mg-ATPase activity increased with age in females, and this increase was statistically significant at physiologically relevant calcium levels above ~500 nM free calcium (Fig. 3B). We also compared maximal actomyosin Mg-ATPase activity between all four groups, as shown in Fig. 4A. The average maximal actomyosin Mg-ATPase activity increased with age in females but not males. Interestingly, activity was lowest in middle-aged females and was significantly lower than age-matched males (Fig. 4A). These data demonstrate that there was an increase in actomyosin Mg-ATPase activity with age across a wide range of activating calcium concentrations, although this effect was sex-specific and occurred only in hearts from females. 
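The calcium-activation curves described above are commonly summarized by fitting a Hill equation, which yields the maximal activity together with the EC50 and Hill coefficient analyzed below. A minimal SciPy sketch (the data points are invented placeholders, not values from this study):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(ca, vmax, ec50, n):
    """Actomyosin Mg-ATPase activity as a function of free calcium."""
    return vmax * ca**n / (ec50**n + ca**n)

# Hypothetical activity measurements (nM free calcium -> activity, arbitrary units).
ca = np.array([50, 100, 200, 400, 800, 1600, 3200, 6400], dtype=float)
activity = np.array([0.02, 0.05, 0.14, 0.35, 0.62, 0.82, 0.93, 0.97])

popt, _ = curve_fit(hill, ca, activity, p0=[1.0, 500.0, 2.0])
vmax, ec50, n = popt
print(f"Vmax = {vmax:.2f}, EC50 = {ec50:.0f} nM, Hill coefficient = {n:.2f}")

# Normalizing to the fitted maximum reproduces the curves compared across groups.
normalized = activity / vmax
```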
When the actomyosin Mg-ATPase activity data were normalized to the maximum for each group, there were no significant age or sex effects (Supplementary Figure 1A,B). Consistent with this finding, the average EC 50 values (concentration of calcium required to produce 50% activation), did not differ between groups (Fig. 4B). The steepness of the actomyosin Mg-ATPase-calcium relationship is indicated by the Hill coefficient such that larger values denote increased cooperativity of calcium activation. To determine if there were age-or sex-related changes in the steepness of the actomyosin Mg-ATPase versus calcium curves, we compared mean Hill coefficients between groups. On average, Hill coefficients declined markedly with age in males, although there was heterogeneity and individual data points for the two age groups overlapped (Fig. 4C). By contrast, there was no age-dependent change in females, but Hill coefficients were significantly higher in older females when compared to older males (Fig. 4C). These findings indicate that cooperativity of the actomyosin Mg-ATPase-calcium relationship declined with age in male hearts. Overall health, measured with a frailty index, graded age-related changes in myofilament function in males but not females. On average, Hill coefficients declined with age in males (Fig. 4C) www.nature.com/scientificreports www.nature.com/scientificreports/ Figure 1. Peak calcium transients declined and slowed with age in C57BL/6 mice of both sexes, but contractions were largely unaffected. (A) Representative examples of contractions (cell shortening) recorded from field-stimulated (4 Hz) ventricular myocytes isolated from middle-aged (~12 mos) and older (~24 mos) male and female mice. (B) Mean data show that peak contractions were similar in all four groups. (C) The velocity of shortening increased slightly with age in males and was faster in older male cells compared to female cells. (D) The velocity of lengthening was unaffected by age but was lower in older females than in older males. (E) Representative examples of calcium transients recorded from myocytes from middle-aged and older mice of both sexes. (F) Mean data show that peak calcium transients declined with age in both sexes and were smaller in cells from females than males at both ages. (G) The rates of rise of the calcium transient declined with age and this was significant in females. The rates of rise were slower in females than males at all ages. (H) The rates of decay of the calcium transients declined markedly with age in both sexes. Values represent the mean ± SEM values in each case. Data were analyzed by two-way ANOVA with age and sex as main factors (post-hoc test was www.nature.com/scientificreports www.nature.com/scientificreports/ whereas maximal actomyosin Mg-ATPase activity increased with age in females (Fig. 4A). However, there was considerable heterogeneity, especially for the Hill coefficients, such that individual values from the two age groups overlapped in many cases. To determine the relationship between parameters derived from the actomyosin Mg-ATPase activity curves and frailty, we plotted maximal actomyosin Mg-ATPase activity, EC 50 values and Hill coefficients as a function of frailty index score (Fig. 5). We fitted each curve by linear regression. We found that there was no correlation between maximal actomyosin Mg-ATPase activity and frailty index scores in either males (Fig. 5A) or females (Fig. 5D). 
Similarly, EC 50 values were not correlated with the level of frailty in either sex (Fig. 5B,E). By contrast, Hill coefficients, which declined with age in males only, exhibited a strong negative association with frailty in males ( Fig. 5C) but showed no relationship with frailty in females (Fig. 5F). As both age and frailty were related to the decline in Hill coefficients in male hearts, we used multivariable regression to calculate the semi-partial correlations to assess the separate contributions of age and frailty to this relationship. With respect to the semi-partial correlations, we found that both frailty (r = −0.816) and age (r = −0.726) contributed significantly to the decline in Hill coefficients. This indicates that age-dependent changes in actomyosin Mg-ATPase activity in male hearts were graded by the overall health of the animal, as quantified with a frailty index score. Scatterplots demonstrate that LVDP, + dP/dt and −dP/dt were similar in hearts from middle-aged and older mice of both sexes. Data were analyzed by two-way ANOVA with age and sex as main factors (post-hoc test was Holm-Sidak). Samples were hearts from 11 middle-aged male mice, 14 older males, 15 middle-aged females and 11 older females, respectively. www.nature.com/scientificreports www.nature.com/scientificreports/ Male and female hearts exhibited distinct age-associated changes in myofilament protein phosphorylation. As changes in the phosphorylation of key myofilament proteins could modify contractile function, we compared myofilament phosphorylation patterns in hearts from middle-aged and older mice of both sexes. Figure 6A-D shows gels of myofilament proteins for all samples evaluated in this study. The proteins were stained with Pro-Q (left) to visualize total phosphorylation levels and stained with Coomassie blue (right) to indicate total protein load. Actin was used as a loading control and the uncropped gels are shown in Supplementary Information Files 1-3. The mean data as well as scatterplots of the individual data points are shown in Fig. 7. Results show that, on average, myosin binding protein-C (MyBP-C) phosphorylation increased with age in both sexes (Fig. 7A). There was also a sex difference where MyBP-C phosphorylation was greater in middle-aged males than age-matched females. Mean phosphorylation levels for myosin light chain-1 (MLC-1), desmin and tropomyosin all increased with age in males but there were no age-associated changes in females ( Fig. 7B-D). In all cases, there was heterogeneity in data, especially for the older males, such that values from the two different age groups showed considerable overlap. There were also sex differences in the older group, where MLC-1 and desmin phosphorylation were higher in older males than older females (Fig. 7B,C). In contrast, troponin-T phosphorylation increased with age in females only and was higher in older females than in older males (Fig. 7E). Phosphorylation of troponin-I was not affected by age or sex (Fig. 7F). These experiments demonstrate that, on average, phosphorylation of MyBP-C, MLC-1, desmin and tropomyosin increased with age in males, but only MyBP-C and troponin-T phosphorylation increased with age in females. Figure 3. Maximal actomyosin Mg-ATPase activity increased with age in hearts from females but not males. (A) Actomyosin Mg-ATPase activity increased as calcium concentrations increased to the same extent in myofilaments from middle-aged and older male hearts. 
(B) In females, actomyosin Mg-ATPase activity increased with age at almost all calcium concentrations tested. Values represent the mean ± SEM values. Data were analyzed with a two-way repeated measures ANOVA, with age as the main factor, calcium concentration as the repeated measure and pairwise multiple post-hoc comparisons with a Tukey test. The * symbol indicates a significant effect of age. Values were significant for p < 0.05. Samples were hearts from 5 middle-aged male mice, 6 older males, 5 middle-aged females and 5 older females, respectively. www.nature.com/scientificreports www.nature.com/scientificreports/ Maximal actomyosin Mg-ATPase activity and Hill coefficients varied with age in a sex-specific fashion, but EC 50 values were unaffected. (A) Mean data show that maximal actomyosin ATPase activity increased with age in females but not males. There was also a sex difference at younger ages where activity was higher in middle-aged males than in middle-aged females. (B) EC 50 values were similar in all four groups. (C) Average values for the Hill coefficients declined markedly with age in males and were significantly lower in older males compared to older females. Values denote the mean ± SEM in each case. Data were analyzed with a two-way ANOVA with age and sex as main factors (post-hoc test was Holm-Sidak). The * denotes p < 0.05. Samples were hearts from 5 middle-aged male mice, 6 older males, 5 middle-aged females and 5 older females, respectively. www.nature.com/scientificreports www.nature.com/scientificreports/ Age-related changes in myofilament phosphorylation were graded by frailty index scores in males but not females. We found that phosphorylation of several key myofilament proteins increased with age, especially in hearts from males (Fig. 7). Still, there was considerable heterogeneity, especially in data from the older males where values from the two age groups exhibited substantial overlap. To determine the relationship between myofilament protein phosphorylation and frailty, we plotted phosphorylation levels as a function of frailty and fitted the curves by linear regression as shown in Fig. 8. Results showed that phosphorylation levels for MLC-1, desmin and tropomyosin exhibited strong positive correlations with frailty in hearts from males ( Fig. 8A-C). When we examined the semi-partial correlations, we found that both frailty (r = 0.817) www.nature.com/scientificreports www.nature.com/scientificreports/ www.nature.com/scientificreports www.nature.com/scientificreports/ Figure 7. Total phosphorylation levels for MyBP-C, MLC-1, desmin & tropomyosin increased with age in males, while only MyBP-C and troponin-T increased with age in females. (A) MyBP-C phosphorylation increased moderately with age in males and dramatically with age in females; MyBP-C phosphorylation also was greater in middle-aged males compared to females. (B) MLC-1 phosphorylation increased markedly with age in males but not females; values were higher in older males than age-matched females. (C) Desmin phosphorylation increased with age in males only; values were significantly higher in older males compared to older females. (D) Tropomyosin phosphorylation increased with age in males only. (E) Phosphorylation of troponin-T increased with age in females only and was significantly higher in older females compared to older males. (F) Phosphorylation of troponin-I was not affected by either age or sex. Values denote the mean ± SEM in each case. 
Data were analyzed with two-way ANOVA (age and sex were main factors; the post-hoc test was Holm-Sidak). The * denotes p < 0.05. Samples were hearts from 5 middle-aged male mice, 6 older males, 5 middle-aged females and 5 older females, respectively. Actin was used as a loading control; it was not affected by either age or sex (not shown). In all cases data were normalized to actin. Myosin-binding protein C (MyBP-C), myosin light chain-1 (MLC-1). www.nature.com/scientificreports www.nature.com/scientificreports/ and age (r = 0.828) contributed significantly to the higher phosphorylation levels for MLC-1 in males (Fig. 8A). Frailty and age also contributed significantly to higher phosphorylation levels for desmin (r = 0.735 for frailty and r = 0.724 for age) and tropomyosin (r = 0.779 for frailty and r = 0.645 for age; Fig. 8B,C). Unlike males, there were no correlations between phosphorylation of MLC-1, desmin or tropomyosin and frailty index scores in hearts from females (Fig. 8D-F). Phosphorylation levels for MyBP-C, troponin-T and troponin-I were not related to frailty scores in either sex ( Supplementary Figure 2A-F). Together these results show that phosphorylation of key myofilament proteins increased as frailty increased in hearts from males but not females. This indicates that age-dependent increases in the phosphorylation of key myofilament proteins were graded by overall health, as quantified in a frailty index, but only in hearts from male animals. www.nature.com/scientificreports www.nature.com/scientificreports/ Discussion This study evaluated the impact of age, sex and frailty on cardiac contractile function and explored underlying mechanisms that regulate contraction in the murine heart. Studies in isolated ventricular myocytes showed that calcium transients declined and slowed between middle age and later life in both sexes. By contrast, contractions in myocytes and in intact hearts were relatively unaffected by age. Myofilament analysis showed that actomyosin Mg-ATPase activity increased with age at physiological calcium concentrations in hearts from females, whereas myofilament cooperativity as represented by Hill coefficients declined in males. Age was accompanied by changes in the phosphorylation levels of several major myofilament proteins. However, the patterns of change differed between the sexes, and age effects were much more prominent in males. These age-associated changes in cooperativity and myofilament phosphorylation were correlated with and graded by the level of frailty in males. By contrast, no relationship between frailty and the myofilament parameters measured in this study was seen in females. Our work highlights the substantial heterogeneity in the impact of age on myofilament proteins, especially in male hearts, and show that both frailty and chronological age contribute significantly to this variance. These observations suggest that differences in overall health status contribute importantly to the impact of age on the heart. Even so, these modifications are sex-specific and are most apparent at high levels of frailty in males only. The results of the present study showed that age had no effect on peak contractions recorded from field-simulated ventricular myocytes from mice of both sexes. This agrees with results of previous studies in field-stimulated ventricular myocytes from aged rodents when compared to young adult males [23][24][25] and females 26,27 . 
Here, we have extended these observations to show that contractions did not change between middle age and later life, even though calcium transients declined and slowed in field-stimulated cells from mice of both sexes. We also confirmed that baseline contractile function in isolated intact hearts did not change between middle-age and later life regardless of sex 28 . Given that cardiac contraction is proportional to the magnitude of intracellular calcium release 29 , our finding that calcium transients decline with age and contractions do not is unexpected. To explore mechanisms that might maintain contractile performance in aging in the face of reduced calcium availability, we examined myofilament calcium sensitivity. Previous work showed that myofilament calcium sensitivity declined when 2-4-month-old mice were compared to 2-year-old animals, although this study used only males 25 . Prior studies of sex differences in myofilament calcium sensitivity used young animals only and found either no sex difference 30 or lower calcium sensitivity in hearts from males when compared to females 31 . To our knowledge, the present study is the first to investigate sex-dependent changes in myofilament calcium sensitivity in the setting of aging. We found that submaximal actomyosin Mg-ATPase activity increased markedly with age, but this was seen in female hearts only. Critically, higher actomyosin Mg-ATPase activity occurred in females at calcium concentrations within the normal physiological range 32 . Our observation that the influence of age on myofilaments is sex-specific is a key finding. The increase in myofilament calcium sensitivity in the aging female heart is likely to be compensatory and may help preserve contractile function in the face of lower intracellular calcium availability in aging. How enhanced myofilament calcium sensitivity arises in older females is not yet clear. One possibility is that the low circulating estradiol levels seen in older female mice and rats 33 could be involved. In support of this, studies in young, ovariectomized females show that short-term exposure to low circulating estradiol increases myofilament calcium sensitivity in the heart 34,35 . In addition, higher actomyosin Mg-ATPase activity occurs early in a murine model of menopause in which ovarian function is gradually reduced by 4-vinylcyclohexene diepoxide injections 36 . On the other hand, both long-term ovariectomy and longer-term exposure to menopause appear to reduce cardiac myofilament calcium sensitivity in the mouse model 36,37 . Thus, whether low circulating estradiol levels can explain enhanced myofilament calcium sensitivity in naturally aging animals is not clear and may be dependent on the duration of the estradiol deficiency; additional experiments that explore this question would be of interest. We also found that age had very little effect on actomyosin Mg-ATPase activity in males, so other regulatory mechanisms that could maintain contraction in the face of lower calcium availability were explored. As age reduces circulating testosterone levels in older mice 33 , the present results suggest that low testosterone may have little effect on myofilament calcium sensitivity. This agrees with our earlier work in male mice where we showed that gonadectomy did not influence myofilament calcium sensitivity at physiological calcium levels 38 . However, the present study did show that Hill coefficients declined with age in males. 
The Hill coefficient represents the steepness of the actomyosin Mg-ATPase-calcium relationship, where a larger value indicates positive cooperativity of calcium activation 39 . Cooperativity has been attributed to a variety of mechanisms, all of which involve tropomyosin either directly or indirectly 39 . Interestingly, we found that phosphorylation of tropomyosin increased with age in males but not females. This may influence the degree of cooperativity observed in aging male hearts. In support of this mechanism, there is evidence that cooperativity declines when tropomyosin is phosphorylated in reconstituted cardiac muscle fibres 40 . Thus, it is possible that enhanced phosphorylation of tropomyosin reduces the cooperativity of calcium activation. However, this change is likely to disrupt rather than preserve contractile function in the aging male heart. We observed sex-specific changes in phosphorylation of myofilament proteins with age. For example, hearts from older males exhibited an increase in the phosphorylation of both desmin and MLC-1. Desmin is a myofilament protein that provides a scaffold to preserve cardiac myocyte structure and protect the heart from stressors such as mechanical stress 41 . Phosphorylation of desmin is thought to facilitate desmin misfolding, which may generate toxic protein aggregates that have been implicated in the pathogenesis of diseases such as heart failure 42 . The dysfunctional role of desmin phosphorylation may be mediated through the accumulation of these amyloid-like oligomers, which increase cytoskeletal cell stiffness and decrease cytoskeletal viscosity thereby presenting a physical impediment to contractility 42 . Thus, elevated desmin phosphorylation may be maladaptive in the aging male heart. We also observed increased MLC-1 phosphorylation in aging male hearts. Although relatively little is known about its role in cardiac contraction, hypophosphorylation of MLC-1 in a zebra fish model www.nature.com/scientificreports www.nature.com/scientificreports/ disrupts myocardial force generation and increases the heart's susceptibility to stress 43 . Thus, this increase in MLC-1 phosphorylation may represent a beneficial effect to compensate for deleterious effects of desmin hyperphosphorylation in aging male hearts. We also found a significant increase in troponin-T phosphorylation in aging females but the significance of this is unclear. Although troponin-T can be phosphorylated at several different sites, this appears to have little impact on myofilament function 44,45 . Our work showed that MyBP-C phosphorylation increased with age in both sexes. MyBP-C regulates cardiac contractile function and is controlled by phosphorylation through multiple signaling pathways, although its contributions to heart function are not fully understood 46 . Interestingly, dephosphorylation of cardiac MyBP-C is a common finding in diseases of contractile dysfunction, such as heart failure, in both humans and animal models 47,48 . In addition, transgenic mice with non-phosphorylatable MyBP-C exhibit impaired systolic and diastolic function 49 . Together these findings indicate that chronic dephosphorylation of MyBP-C disrupts myocardial contractile function. As phosphorylation of cardiac MyBP-C induces a conformational change in myosin that promotes cross-bridge formation 50 , an increase in MyBP-C phosphorylation may be a compensatory mechanism that helps preserve cardiac contractile function in aging. 
While MyBP-C phosphorylation increased with age and maybe be a compensatory mechanism in both sexes, opposing phosphorylation changes that impair myofilament function (e.g. desmin phosphorylation) in males may off-set beneficial alterations in the contractile apparatus. Frailty is a key determinant of overall health status and mortality in mice of similar chronological ages, as it is in humans 13 . We have previously shown that age-associated adverse remodeling in the atria and ventricles is graded by the level of frailty in male mice 10,51,52 . This suggests that cardiac aging and overall health are closely linked, at least in males. A novel and important finding in the present study is our observation that, while most age-associated changes seen in male hearts were graded by frailty, none of the age-dependent changes in females exhibited any clear relationship with frailty. As identified by Maric-Bilkan et al. 53 , it is critically important to conduct basic research in cardiovascular biology in animals of both sexes to address the knowledge gap in how sex can affect research outcomes. Our results highlight the importance of using both male and female animals in such studies, as very different conclusions would have been reached had only one sex been used here. Taken together, our data suggest that poor overall health, quantified in a frailty index, predicts adverse changes and post-translational modifications in myofilaments in the aging male heart but not in the aging female heart. Some of these changes, such as the increased phosphorylation of MLC-1 at high frailty levels, may be compensatory and help preserve contractile function in hearts from frail older men. Still, enhanced phosphorylation of desmin and tropomyosin, as well as smaller Hill coefficients, would be expected to negatively affect heart function and may ultimately promote the development of diseases of impaired contractility in frail older men. Our findings also suggest that females may be more resilient than males to the effects of poor overall health on the heart. We have demonstrated that intracellular calcium availability declined between middle age and later life in myocytes from male and females, but contractions were preserved. Contractile function was maintained in aging females by an increase in myofilament calcium sensitivity and enhanced phosphorylation of MyBP-C. By contrast, there was no compensatory increase in myofilament calcium sensitivity in males, although enhanced phosphorylation of MLC-1 and MyBP-C in males may help preserve contractile function in aging. Still, the elevated phosphorylation of tropomyosin and desmin, as well as the decrease in positive cooperativity, would be expected to ultimately impair contractile function in older males. We found that the impact of age on myofilament proteins was heterogenous in male hearts and our results demonstrate that both frailty and chronological age contribute significantly to this variance. This suggests that there is a link between cardiac aging and overall health in aging males, while older females may be resistant to the adverse effects of frailty on the heart. These findings may help explain the so-called morbidity-mortality paradox, where older women have higher levels of frailty than men at any age, but live longer 54 . Further exploration of the mechanistic basis for sex-specific changes in aging and frailty is motivating additional inquires by our group. Mouse clinical frailty assessment. 
Overall health was assessed with a mouse clinical frailty index tool. Animals This instrument is an index of 31 potential deficits in health that can accumulate with age in C57BL/6 mice 20 . This tool evaluates deficits in overall health (e.g. the integument, musculoskeletal system, vestibulocochlear/auditory systems, ocular/nasal systems, digestive system, urogenital system, respiratory system, signs of discomfort, body weight and body surface temperature) and none of the potential deficits is a measure of cardiovascular health per se. This non-invasive instrument is validated and reliable, as described previously 55,56 . Mice were assessed in a quiet room after they had acclimatized for approximately 10 minutes. They were individually evaluated for the presence of 31 potential deficits. For each item, mice without the deficit received a score of 0, those with a mild deficit received 0.5, and those with a severe deficit scored a 1. Values for each deficit were then added and divided by the total number of deficits assessed to produce a frailty index score that could theoretically be between 0 and 1. Cells were loaded with calcium-sensitive dye (fura-2 AM, 2.5 μM; Invitrogen, Burlington, ON) for 20 minutes in the dark on the stage of an inverted microscope (Nikon Eclipse TE200, Nikon Canada, Mississauga, ON). Cells were superfused (3 mL/min) with buffer (mM): 145 NaCl, 10 glucose, 10 HEPES, 4 KCl, 1 CaCl 2 , and 1 MgCl 2 (pH 7.4) at 37 °C. Cell shortening and calcium transients were simultaneously recorded by splitting the microscope light between the camera (model TM-640, Pulnix America) and photomultiplier tube (PTI, Brunswick NJ, USA) with a dichroic mirror (Chroma Tech. Corp. Rockingham, VT). Cells were viewed on a closed-circuit television monitor linked to a video edge detector (Crescent Electronics, Sandy, UT) to measure cell length (120 samples/sec). Calcium transients were measured with a DeltaRam fluorescence system and Felix software (Photon Technologies International, Birmingham, NJ). Cells were alternately excited at 340 and 380 nm and fluorescence emission at 510 nm was recorded for both wavelengths (200 samples/sec). Recordings were background corrected and emission ratios were converted to calcium concentrations with an in vitro calibration curve as in our earlier studies 57,58 . Cells were field-stimulated at 4 Hz with bipolar pulses delivered through platinum electrodes via a stimulus isolation unit (Model # SIU-102; Warner Instruments, Hamden, CT) controlled by pClamp 8.1 software (Molecular Devices, Sunnyvale, CA). Langendorff-perfused heart studies. Langendorff-perfused heart studies were conduced as we have previously described 33 . In brief, mice were weighed and anesthetized as described above. Hearts were excised, cannulated on a Langendorff apparatus (Radnoti LLC, Monrovia, Ca, USA) and perfused at constant pressure (80 ± 0.5 mmHg; 37 °C) with the following buffer (mM): 126 NaCl, 0.9 NaH 2 PO4, 4 KCl, 20 NaHCO 3 , 0.5 MgSO 4 , 5.5 glucose, and 1.8 CaCl 2 (95% O 2 , 5% CO 2 ; pH 7.4). A fluid filled balloon was inserted into the left ventricle via the left atrium and inflated to 5-10 mmHg. Pressure was recorded with a pressure transducer and PowerLab 8/35 data acquisition system (ADInstruments, Colorado Springs, CO, USA). Data were analyzed with LabChart 7 software (ADInstruments). Left ventricular pressure was measured to quantify LVDP and the maximum rates of pressure development (+dP/dt) and decay (−dP/dt). 
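A minimal sketch of how LVDP and the maximal rates of pressure development and decay described above can be extracted from a sampled left-ventricular pressure trace (NumPy; the synthetic trace and the 1 kHz sampling rate are assumptions for illustration, not the acquisition settings of the PowerLab system):

```python
import numpy as np

fs = 1000.0                        # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)

# Synthetic LV pressure trace (mmHg): diastolic offset plus periodic beats.
pressure = 8 + 90 * np.clip(np.sin(2 * np.pi * 8 * t), 0, None) ** 3

dPdt = np.gradient(pressure, 1 / fs)     # mmHg/s

lvdp = pressure.max() - pressure.min()   # developed pressure over the window
max_dPdt = dPdt.max()                    # +dP/dt
min_dPdt = dPdt.min()                    # -dP/dt

print(f"LVDP = {lvdp:.1f} mmHg, +dP/dt = {max_dPdt:.0f}, -dP/dt = {min_dPdt:.0f} mmHg/s")
```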
Hearts were allowed to stabilize for 20-30 minutes before recordings were made; responses over a 10-minute period were averaged. Myofilament studies. Myofilaments were isolated with our established techniques 59 . Briefly, mice were anesthetized with sodium pentobarbital (described above) and their hearts were removed. The ventricles were weighed, flash frozen in liquid nitrogen and stored at −80 °C. Tissue was homogenized in ice-cold buffer (mM): 60 KCl, 30 imidazole (pH 7.0), 2 MgCl 2 , 0.01 leupeptin, 0.1 PMSF, 0.2 benzamidine, and phosphatase inhibitors (P0044, Sigma-Aldrich) and centrifuged (14,000 g; 15 min; 4 °C). The pellet was re-suspended in the homogenizing buffer supplemented with 1% Triton X-100 (45 min, on ice). This solution was then centrifuged (1,100 g; 15 min; 4 °C) and the myofilament pellet was washed three times in ice-cold buffer and re-suspended in homogenizing buffer. Myofilaments were either flash frozen (for subsequent myofilament protein phosphorylation assays) or kept on ice and used immediately to assess actomyosin MgATPase activity. Myofilaments (25 µg) were incubated in ATPase buffers supplemented with increasing concentrations of free calcium (10 min; 32 °C) to quantify actomyosin MgATPase activity, as we have previously described 60 . Reactions were quenched with 10% trichloroacetic acid and then equal volumes of FeSO 4 (0.5%) and ammonium molybdate (0.5%) in 0.5 M H 2 SO 4 were added. The production of inorganic phosphate was measured as the absorbance at 630 nm. Myofilament protein phosphorylation was assessed with our established techniques 59 . Briefly, myofilament proteins (10 µg) were separated with SDS-PAGE (12%) and fixed in 50% methanol-10% acetic acid (23 °C) overnight. ProQ Diamond staining was used to assess myofilament protein phosphorylation (Molecular Probes, Eugene, OR). Gels were imaged with a Bio-Rad ChemiDoc MP Imaging System (Bio-Rad Laboratories Ltd., Mississauga, ON) and they were analyzed with ImageJ (NIH, Bethesda, MD, USA). The protein load of each gel was determined by Coomassie staining, after the ProQ Diamond staining and imaging. Actin was selected to represent protein load as we have done previously 38 . To permit comparisons across gels, an equal amount of protein standard was loaded in multiple lanes of each gel. The protein standards (Bio-Rad 161-0374) at 25 and 75 kDa are visible during ProQ Diamond imaging, allowing for standardization across all gels. These standards showed equal fluorescence across all gels (<3% variation at most). Statistics. Data are expressed as mean ± SEM unless otherwise indicated. The effects of age and sex on each outcome were compared with a 2-way ANOVA followed by Holm-Sidak post-hoc tests. When actomyosin Mg-ATPase activity was plotted as a function of calcium concentration, male and female groups were analyzed separately with a two-way repeated measures ANOVA, with age as the main factor and calcium concentration as the repeated measure and pairwise multiple post-hoc comparisons with a Tukey test. We evaluated relationships between various parameters and frailty with linear regression analysis. When both age and frailty were significantly related to a parameter under study, we used multivariable regression and calculated semi-partial correlations to assess their separate contributions. Sigmaplot software (v15.0, Systat Software Inc.) and SPSS software (v21.0) were used for all statistical analyses and Sigmaplot was used to construct graphs. 
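For the semi-partial correlations mentioned above, one common way to compute them is to correlate the outcome with the residual of one predictor after regressing it on the other. A minimal NumPy sketch (the arrays are invented placeholders, not study data):

```python
import numpy as np

def semipartial_corr(y, x, control):
    """Correlation between y and the part of x not explained by the control variable."""
    A = np.column_stack([np.ones_like(control), control])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # regress x on the control variable
    x_resid = x - A @ coef
    return np.corrcoef(y, x_resid)[0, 1]

rng = np.random.default_rng(0)
age = rng.choice([12.0, 24.0], size=11)                  # months (placeholder)
frailty = 0.01 * age + rng.normal(0, 0.05, size=11)      # placeholder frailty scores
hill_coeff = 3.0 - 2.0 * frailty - 0.02 * age + rng.normal(0, 0.1, size=11)

print(round(semipartial_corr(hill_coeff, frailty, age), 3))   # frailty with age removed
print(round(semipartial_corr(hill_coeff, age, frailty), 3))   # age with frailty removed
```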
P values of less than 0.05 were considered significant.
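As a compact restatement of the frailty-index scoring described in the methods above, a minimal Python sketch (the deficit names and scores are invented for illustration and are not items from the actual 31-item instrument):

```python
def frailty_index(deficit_scores):
    """Frailty index = sum of deficit scores / number of deficits considered.

    Each deficit is scored 0 (absent), 0.5 (mild) or 1 (severe), so the index
    falls between 0 and 1, with higher values denoting greater frailty.
    """
    if not deficit_scores:
        raise ValueError("at least one deficit must be assessed")
    allowed = {0, 0.5, 1}
    if any(score not in allowed for score in deficit_scores.values()):
        raise ValueError("scores must be 0, 0.5 or 1")
    return sum(deficit_scores.values()) / len(deficit_scores)

# Hypothetical example with a handful of items (the real tool uses 31 deficits):
mouse = {"alopecia": 0.5, "kyphosis": 0, "cataracts": 1,
         "gait_disorder": 0.5, "piloerection": 0}
print(round(frailty_index(mouse), 3))   # 0.4
```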
8,026.2
2020-06-22T00:00:00.000
[ "Medicine", "Biology" ]
QUANTITATIVE COMPARISON BETWEEN NEURAL NETWORK- AND SGM-BASED STEREO MATCHING Abstract. Over the last decades, various methods for three-dimensional detection of the environment have been developed and successfully used. This work considers classical stereo methods, which determine depth information by means of correspondence analysis on the basis of two pictures of a scene. Recently, neural networks have been used to solve the correspondence analysis. These procedures took first places on the corresponding benchmarks and are ahead of many already established solutions. In this work, images captured by the ZED camera are evaluated for the accuracy of the depth maps generated by several approaches, including modern methods based on neural networks. INTRODUCTION AND MOTIVATION For the 3D capture of scenes and their semantic interpretation, there are at the moment a number of systems for reconstruction. A standard procedure relies on the use of stereo sensors. The central point is the correspondence analysis (stereo matching) between the left and right images. Classical local methods, such as pixel correlation, incorporate neighbourhood information in a way that smears edges. Global procedures could help, but they are computationally expensive. Therefore, the semi-global matching approach of (Hirschmuller, 2005) introduced a paradigm shift: this algorithm provides dense disparity maps at image resolution, with sharp edges and in near real-time. Lately, new algorithms based on neural networks for solving this task have become available. In this paper the different approaches are compared using real image data. As a reference system, the ZED stereo camera created by stereolabs was used. RELATED WORK Stereo matching and disparity computation is a way of reconstructing depth from at least two images, captured from two different view points or viewing angles. In photogrammetry the desire to measure depth has evolved ever since the invention of photography and cameras. Early photogrammetric models compute the disparity on a point-pair basis. With the rising available computing power, researchers wanted to compute dense disparity maps from two cameras. This introduces many problems. Since, in the best case, the stereo reconstruction works unsupervised, distinctive pixel similarity measures have become extremely important, for instance (Birchfield, Tomasi, 1998). However, there were still a lot of problems regarding the different view points, for instance different illumination angles, occlusion, spot lights, smooth and texture-free surfaces, etc. (Scharstein, Szeliski, 2002) introduced a database of stereo scenarios, called the Middlebury database. They also compared different methods for their accuracy and density. One of the first really successful methods was presented by Hirschmüller (Hirschmuller, 2005). It works locally as well as semi-globally on the image by sampling paths from each pixel in different directions and optimising the matching in the neighbourhood of pixels. Researchers have since worked on different aspects of the stereo matching problem. For instance, (Hermann, Vaudrey, 2010) improved the pixel similarity. These methods use classical computer vision techniques. Ever since the rise and success of deep neural networks, publications have also tried to make use of their computational power for the stereo matching problem. (Pal et al., 2012) used a learning approach utilising conditional random fields. One of the first results on the ImageNet database (e.g.
(Krizhevsky et al., 2012)) led to increased research interest and to specialised databases like the KITTI vision benchmark suite of (Geiger et al., 2012). The dense stereo problem also became more and more interesting for autonomous driving applications, to foresee the structure of the street, the vehicles in front of the car and other obstacles like pedestrians. The concept of the scene flow can be interpreted as a dense stereo matching problem. In the context of autonomous driving, the scene flow and vehicles were estimated jointly by (Menze et al., 2015). Optical flow and hence scene flow can also be estimated by deep neural networks, as in (Dosovitskiy et al., 2015). This was implemented using a contracting and an expanding part in the design of the network layers. Optimisations in the learning speed and better convergence led to the development of even deeper networks, as in (He et al., 2016). A data set for scene and optical flow computation comparison was provided by (Mayer et al., 2016). It makes use of synthetic data and enables training of more complex networks such as FlowNet, since the DNNs require a lot of stereo data. Improvements on deep stereo matching were made by (Zbontar et al., 2016) and by (Kendall et al., 2017), the latter by providing a learning method that covers the whole process of geometry and context estimation. The focus of this article is the evaluation of the dense and deep stereo matching methods presented in (Dosovitskiy et al., 2015, Mayer et al., 2016, Kendall et al., 2017). These networks are called DispNet, DispNetC and GCNet. PRE-PROCESSING AND STEREO SENSOR SDK This article assumes a calibrated stereo camera system. The model description is based on the pinhole camera model; the distortion is described according to the Brown model (Duane, 1971). The object is located in the stereo area and is completely imaged by both cameras (hardly any occlusions). Before matching, the image data of both cameras are rectified into an epipolar geometry. With the ZED camera, the company stereolabs launched a stereo camera on the market in 2015 (Stereolabs, 2015). It contains two identical (synchronised) RGB cameras with a 1/3 inch sensor; each pixel is square and has a size of 2 µm. The aspect ratio of the sensor is 16 : 9. The optics are specified with a field of view of 110 degrees (F-number = 2). The camera supports various resolutions and refresh rates, which are listed in Table 1. The maximum possible resolution of a camera is 2208 × 1242; the minimum resolution is 672 × 376. As expected, the image repetition rate depends on the resolution. Both cameras are arranged in parallel (stereo normal case) and stand at a distance of about 12 cm from each other. An SDK is available, which requires a CUDA-capable graphics card. In addition to camera control, the SDK provides stereo-matching and depth-perception functions, which can be used both indoors and outdoors in a range of 0.5 m to 20 m. The results are created in real time. The disparity map has the same resolution as the captured image. The quality and frequency of the output disparity maps can be controlled by using three predefined modes (Performance, Medium, Quality). MATCHING APPROACHES We limited ourselves to various variants of semi-global (block) matching (Hirschmuller, 2005) and to methods based on neural networks. The choice fell on the networks DispNetC and DispNet, which were in 16th place on the KITTI benchmark at this time (Geiger et al., 2012).
The decisive criterion was the availability of the source code and a ready-made model for comparison. Semi-Global Matching This method minimises a global cost function. In order to reduce the computational effort, different paths leading from the edge of the image to each pixel are considered, and the cost of each step is determined by the sum of two terms. The first term describes the direct matching cost, while the second term penalises different disparities of the neighbouring pixel along that direction. In doing so, it differentiates between the same or similar disparities and disparity jumps, and the path with the lowest cost is selected from this set. In this work we consider the implementation provided by OpenCV with 5 or 8 paths (Itseez, 2015). DispNetC The neural network DispNetC is trained end-to-end and consists of two parts: a Contracting Part and an Expanding Part (Dosovitskiy et al., 2015, Mayer et al., 2016). An integral component is the Correlation Layer introduced by the authors, which merges the feature vectors of both images and passes the result on towards the Expanding Part. As input, the network receives a rectified and normalised stereo image pair. The output is the disparity map. DispNet DispNet is a simplified form of DispNetC. Instead of treating the right and left image individually, the two images are stacked and processed together by the network. Compared to DispNetC, the correlation layer and the independent processing of the stereo images are abandoned. The number of convolutional layers or transposed convolutional layers in the contracting and expanding part remains identical. As a result of this change, the resulting network no longer specialises in the task of correspondence analysis. Since it does not receive any specifications, the network must learn independently how to extract features from the images, identify correspondences and generate a disparity map from them. Implementation and training of the networks The original authors have implemented the two networks in a modified version of the framework Caffe (Jia et al., 2014) and trained them with the SceneFlow data set FlyingThings3D (Mayer et al., 2016). Note that we did not add our own data. METHOD FOR COMPARISON OF STEREO SYSTEMS In order to test the methods for typical problems of correspondence analysis, scenarios were created for depth estimation, the interpretation of homogeneous surfaces, edge transitions and round surfaces. With the ZED camera, stereo images were taken of these scenes, which were then examined using the described methods. Experimental setup The ZED camera, equipped with a laser rangefinder, was mounted on a tripod and aligned with a wall (see Fig. 1a). To make the measurements comparable, a rail was attached to the floor orthogonal to the wall. Along this rail, the tripod with the ZED camera was moved in order to allow recordings from different distances of up to 4 m. The schematic structure is shown in Fig. 1b. Depending on the scene, an additional object was placed in front of the wall. This then served as the reference point for the distance measurement and reduced the maximum possible distance. Image frames were taken at intervals of 0.5 m starting at 1.0 m; thus, depending on the scene, 6 to 7 images were taken. For the evaluation, the disparity maps were generated with all procedures using the rectified images of the ZED camera, and then the associated point clouds were generated.
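As a concrete illustration of the OpenCV semi-global block matcher referred to above, a minimal sketch of how it might be invoked on a rectified ZED image pair (Python/OpenCV; the file names and parameter values are illustrative assumptions, not the settings used in this evaluation; MODE_HH enables the 8-path variant, while the default mode uses fewer paths):

```python
import cv2
import numpy as np

# Rectified left/right images from the ZED camera (illustrative file names).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)
assert left is not None and right is not None

block_size = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,            # must be a multiple of 16
    blockSize=block_size,
    P1=8 * block_size ** 2,        # penalty for small disparity changes
    P2=32 * block_size ** 2,       # penalty for disparity jumps (P2 > P1)
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
    mode=cv2.STEREO_SGBM_MODE_HH,  # 8 aggregation paths; the default mode uses 5
)

# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
valid = disparity > 0
print(f"coverage: {100.0 * valid.mean():.1f} % of pixels have a disparity")
```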
10 disparity maps and point clouds were evaluated for each of the three selected scenes (Fig. 2): 6 for the processes of stereolabs, one each for DispNet and DispNetC, and two for the variants of SGBM (OpenCV). All pictures were taken with a resolution of 1280 × 720. This was chosen because it was closest to the resolution of the SceneFlow data set used for training (resolution: 960 × 540). Scenarios In Fig. 2, the individual scenarios are shown from the perspective of the left camera at 1.5 m distance. Scene 1: White area. In this simple case we wanted to test how the procedures deal with flat surfaces. The camera is aimed at a white surface with a slight texture (rough-grained wallpaper). Scene 2: Edge. Disparity jumps always present matching methods with challenges. To put these to a test, a wooden box is placed parallel to the wall. The wooden box is 0.35 m wide and the distance from its surface to the wall is 0.49 m. Since the box is made of wood, it has a pronounced texture, so one can also evaluate the influence of the texture on the accuracy of the derived 3D structure. Scene 3: Ball. Finally, the reconstruction of curved surfaces should be checked. The object used is an exercise ball. EVALUATION AND RESULTS In this chapter, the methods to be investigated (the methods of stereolabs, SGBM, DispNet, DispNetC) are analysed on the basis of the described scenarios. This includes the density of the disparity map itself, the distance estimation, and the reconstruction of a sharp edge and of a round object. First, we consider the point density of the generated disparity maps. Then the distance values recorded for scenes 1 and 2 are compared with the measured values. Finally, the reconstruction of the sphere is considered. Density of the depth map In the best case, the coverage with 3D points is equivalent to the full number of image pixels. In general, the DispNet, DispNetC and ZED camera algorithms achieve 100% coverage in FILL mode (hole-filling filter) (see Table 2). All other methods have holes or undefined areas. Fig. 3 shows that the algorithms have difficulties at the edges. Distance measurements (scenes 1 and 2) Basically, one can say that none of the methods was able to determine the distances correctly and that the error increases (quadratically) with the distance. As shown in Figure 4, all processes are subject to variations, and the processes of stereolabs are much more reliable. When the surface is well textured, the variations in all processes decrease. It is noteworthy that the processes of stereolabs then fluctuate only by a few centimetres around the mean (1-2 cm, see Fig. 4b), whereas all other processes are subject to greater fluctuations from a distance of 2.5 m (up to 7 cm). Edge reconstruction The following section shows how well the procedures are able to handle disparity jumps. Fig. 5b shows a highlight on the surface, which causes problems for some reconstruction methods. A good result is generated by DispNet and DispNetC. The edge is clearly visible in Figs. 5c and 5d, the surface also appears homogeneous and is barely affected by the highlight. In addition, the hidden area behind the edge is correctly assigned to the background.
The stereolabs method produces good results (see Figs. 5e to 5g). The highlight has, depending on the mode, only little or no effect on the result. However, if one needs a complete disparity map and thus activates the FILL mode, an error occurs that is produced by the highlight (see Figures 5h to 5j). It should be noted that the filters used by stereolabs reduce this error with increasing quality of the depth map. Reconstruction of a sphere The evaluation of the recordings showed that none of the algorithms could reconstruct the round ball (see Figure 6). The surface of the ball was interpreted rather as a plane. This applies to all methods and all distances. The best visual reconstruction was achieved by DispNet and ZED QUALITY (the stereolabs method using the highest quality level) at a distance of 1.5 m (see Figure 7). Finally, one has to conclude that it is not possible to correctly determine the point correspondences with these methods. The cause is the largely texture-free and shiny surface, which contains many similar points. SUMMARY AND OUTLOOK In this work different stereo-matching methods were considered. These range from classic methods to proprietary ones, as well as to those based on neural networks. Special attention was paid to the ZED stereo camera by stereolabs. In addition, methods based on neural networks (DispNet, DispNetC) were set up and compared with the method implemented in OpenCV (Semi-Global block matcher). All methods have been evaluated in problem-specific scenarios. It can be concluded from the analysis that all methods are affected by the typical problems of correspondence analysis. This concerns in particular the reconstruction of homogeneous surfaces, highlights and occlusions. In general it can be stated that good texturing is beneficial and that the quality decreases with increasing distance. The optimal working range for the ZED stereo camera seems to be in the near range (<2 m). Overall, the stereolabs method performed best. Thanks to the adjustable quality, the result can be tuned to the application and the error reduced. However, this method has problems with edges (soft transitions) and highlights. The results of the neural networks do not live up to what their ranking in the benchmarks suggests. The DispNet and DispNetC networks deliver solid near-field results, but fall behind the stereolabs approach from a distance of 2.5 m onwards. It can be assumed that the result depends heavily on the training data set and that the network has been optimised for the benchmark, which is why a direct transfer to other applications does not produce the desired result. Nevertheless, the networks are able to deal better with problem areas such as edges and highlights. The Semi-Global block matcher (SGBM) had major problems with the recordings made. The results are characterised by large fluctuations and irregular areas. With good texturing, the method can again deliver solid results. All this, however, is a very narrow view of the whole problem: the methods could behave differently in other situations. For now, the method implemented by stereolabs works best, which is most likely also due to the manufacturer's detailed knowledge of the sensor and device properties. The investigated methods should be examined in the future with regard to speed, memory consumption and required hardware. For example, a high-performance graphics card is required to train the neural networks.
Inference itself can be carried out with most frameworks, including on the CPU. It would be interesting to see how large the performance differences between GPU and CPU are depending on the resolution. Furthermore, it is obvious that the results of the neural networks depend on the training data. Convolutional networks offer the possibility to work on different resolutions. It would therefore be interesting to investigate to what extent the resolution of the training data influences the result and how the result can be improved by specific adaptation of the training data. Introducing noise, de-calibration in the y-direction, brightness differences between the two images, and similar perturbations is conceivable.
4,106.2
2019-09-12T00:00:00.000
[ "Computer Science" ]
Wall Thinning Assessment for Ferromagnetic Plate with Pulsed Eddy Current Testing Using Analytical Solution Decoupling Method : The wall-thinning measurement of ferromagnetic plates covered with insulations and claddings is a main challenge in petrochemical and power generation industries. Pulsed eddy current testing (PECT) is considered as a promising method. However, the accuracy is limited due to the interference factors such as lift-off and cladding. In this study, by decoupling analytic solution, a feature only sensitive to plate thickness is proposed. Based on the electromagnetic waves reflection and transmission theory, cladding-induced interference is firstly decoupled from the analytical model. Moreover, by using the first integral mean value theorem, interferences of insulation and the lift-off are decoupled, too. Hence, the method is proposed by calculating Euclidean distances between the normalized detection signal and normalized reference signal as the feature to assess wall thinning. Its effectiveness under various conditions is examined and results show that the proposed feature is only sensitive to the ferromagnetic plate thickness. Finally, the experiment is carried on to verify this method practicable. Introduction Ferromagnetic plates are commonly used materials in petrochemical and power generation industries. Wall thinning is one critical threat to ferromagnetic plates. It is caused by corrosion under insulation (CUI), flow accelerated corrosion (FAC), or liquid droplet impingement (LDI), and severely affects the structural strength and integrity of ferromagnetic plates [1,2]. Therefore, wall thinning assessment is important. The thickness measurement is an essential early warning method. Whereas the ferromagnetic plates in petrochemical and power generation applications are always wrapped with insulations and externally protected metal claddings. Therefore, it is challenging for the commonly used methods, such as ultrasonic testing (UT) and eddy current testing (ECT), to determine the plate thickness without removing the insulations and claddings [3]. Pulsed eddy current testing (PECT) provides a possible solution. PECT involves excitation by a square-wave pulse rather than a sinusoidal waveform. So, it contains a variety of frequency components and large driving electric currents, which allows for non-contact remote sensing [4,5]. Therefore, it could be used to measure the thickness of ferromagnetic plates covered with insulations and claddings. The PECT signal is a complicated coupling response related to many factors [6,7]. including ferromagnetic plate thicknesses, insulations, claddings, and lift-offs (distance from the sensor to the cladding), etc. Thus, decoupling the ferromagnetic plate thickness from other factors is a key problem in PECT. Some features have been proposed to evaluate the plate thickness. Waidelich et al. [8] proved that the peak value, time to zero-crossing (TZC), and lift-off intersection (LOI) of the differential PECT signal could be used to evaluate the thickness. Bieber et al. [9] verified these features experimentally and further proposed that the peak value was proportional to the metal loss extent, and the TZC contained information regarding the flaw depth. Fan et al. [10] showed that the LOI point could be adjusted by varying the rising time of pulse excitation, which was beneficial to extend the measurement range and to increase the measurement sensitivity. 
Smith and Hugo [11] used the time-to-peak to characterize defects and structural variations in aging aircraft structures, showing that the time-to-peak and defect depth exhibited a quadratic relationship. Xu et al. [12] compared the time-to-peak and peak value for wall thinning assessment, and found that the time-to-peak was superior to the peak value due to its linear relationship with wall thickness. Tian et al. [13] proposed the rising point to identify and quantify the defects, and proved that the rising point was related to the propagation time of electromagnetic waves in metallic plates. However, these features [8][9][10][11][12][13] are all obtained by analyzing special points in PECT signals, they are easily affected by the confounding factors. For example, the peak value [8,9] and the time-to-peak [11,12] are related to the cladding thicknesses [12], and the rising point [13] relates to the sensor lift-off [14]. Cheng [15] discussed magnetic field variation by using an anisotropic magneto-resistive (AMR) sensor embedded differential detector, showing that the signal decay behavior was only relevant to the pipe wall thickness over a limited time after switching off the excitation current. Li et al. [16] studied the magnetic flux change with variable pulsed width excitation, demonstrating that the slope of the relative increase in magnetic flux and pulse width could be used to evaluate the ferromagnetic plate thickness. However, the feature proposed by Cheng [15] cannot be obtained when a coil is used as a detector, which limits its application. The slope of the relative increase in magnetic flux and pulse width [16] are complicated. Therefore, an efficient and easy-to-use signal feature, which is sensitive only to the ferromagnetic plate thickness, is still needed. In this study, a feature only sensitive to plate thickness is proposed through electromagnetic waves theory. The electromagnetic waves theory has been used for PECT analysis in some researches. Waidelich [8] propose features, such as the peak value, TZC, and the LOI, by considering the wave produced by the square-wave pulse was a plane wave. However, the model [8] was imprecise, because the wave produced by the square-wave pulse could be regarded as a plane wave only if the transmission coil radius or plate thickness was sufficiently large [17,18]. Then, Dodd-Deeds model was used to obtain the analytical solution of the PECT problem [19,20]. Fan et al. [21] reinterpreted the model using the reflection-transmission theory, which provided it a clearly physical meaning. According to the previous studies, the reflection and transmission of electromagnetic waves in the ferromagnetic plates covered with insulations and claddings are further studied, and by decoupling the analytical solution, a feature which is only sensitive to the thickness is introduced in this study. The rest of this paper is organized as follows: Part II describes the modeling of the insulated ferromagnetic plate, and the theoretical analysis for decoupling the interference caused by claddings, insulations, and lift-offs. In Part III, the similarity measurement based feature is presented, and in part IV, the validity of the feature is proved by experimental study. In part V, the performances of the feature are examined under various conditions. Finally, a brief conclusion is provided in part V. 
Theoretical Analysis In this section, the PECT analytical model and its solution are described first; then, the reflection and transmission of the electromagnetic waves are discussed in order to decouple the cladding-induced interference. Moreover, the influences of the insulation and of lift-off effects on the PECT signal are studied by using the first integral mean value theorem. Solution for PECT Analytical Model As shown in Figure 1, a ferromagnetic plate with an insulation and a cladding is modeled as a four-layered structure for simplicity. The layers from bottom to top represent the air, the plate, the insulation, and the cladding, successively. The layer over the cladding is layer 5, and it is divided into three subregions. The sensor, consisting of transmitter and receiver coils with rectangular cross-sections, is in section I-II. According to the reflection and transmission theory of electromagnetic waves, the reflection coefficient at the interface between layers k + 1 and k is R_{k+1,k}(α) = (μ_rk β_{k+1} − μ_r(k+1) β_k)/(μ_rk β_{k+1} + μ_r(k+1) β_k), where β_k = (α^2 + jω μ_0 μ_rk σ_k)^{1/2}; μ_rk and σ_k are the relative magnetic permeability and electrical conductivity of layer k, respectively; j is the imaginary unit; ω is the angular frequency of the sinusoidal harmonic; μ_0 is the permeability of vacuum; and α can be understood as a wavenumber [5]. The received waves are the superposition of reflection waves from multiple interfaces, so some scholars have defined a generalized reflection coefficient to represent this superposition [20][21][22]. The generalized reflection coefficient R'_{k+1,k} is defined as the ratio of the electromagnetic wave reflected at all the interfaces between layer 1 and layer k + 1 to the electromagnetic wave incident from layer k + 1 onto layer k. It satisfies the recursive Equation (2). R'_{5,4}(α) is the generalized reflection coefficient of the four-layered structure, which can be derived from this recursion. For each frequency component, the induced voltage in the receiver coil can be deduced based on the Dodd-Deeds model [23], where I(ω) is the amplitude of the harmonic excitation current; S(α) is the spatial frequency spectrum of the sensor, which gives the amplitude of the contributions as a function of α [21]; J_1 denotes the first-order Bessel function; l_1 is the sensor lift-off and e^(−2αl_1) is the lift-off coefficient; n is the number of coil turns; r_1, r_2, and (l_2 − l_1) are the inner radius, outer radius, and height of the coil, respectively; and the subscripts T and R label the transmitter and receiver coils, respectively. Substituting Equations (1)-(3) and Equation (5) into Equation (4), the induced voltage in the frequency domain can be obtained. As the square-wave excitation current of PECT can theoretically be represented by superimposing a series of sinusoidal harmonics in the frequency domain, the PECT signal can be derived from the sum of the harmonic responses through an inverse discrete Fourier transform (IDFT), where t_s denotes the s-th point in time, m denotes the m-th sinusoidal harmonic, and N is the number of sampling points. Decoupling the Cladding-Induced Interference As shown in Equation (1), the solution for the PECT signal depends mainly on three parameters: the sensor spatial frequency spectrum, S(α); the generalized reflection coefficient of the four-layered structure, R'_{5,4}(α); and the lift-off coefficient, e^(−2αl_1). According to Equations (1)-(3), R'_{5,4}(α) is determined by the ferromagnetic plate, the insulation and the cladding, while e^(−2αl_1) is related to the sensor lift-off.
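A numerical sketch of how the layered-medium recursion for the generalized reflection coefficient described above could be evaluated is given below. The layer parameters are illustrative assumptions loosely based on the values quoted later in the paper, and the sketch deliberately omits the sensor spectrum S(α) and the α-integration of the full model.

```python
# Sketch of the recursion for the generalized reflection coefficient R'_{k+1,k}(alpha)
# of a layered structure (air | plate | insulation | cladding | air).
# All layer parameters are illustrative assumptions, not calibrated values.
import numpy as np

MU0 = 4e-7 * np.pi

def beta(alpha, omega, mu_r, sigma):
    return np.sqrt(alpha**2 + 1j * omega * MU0 * mu_r * sigma)

def local_reflection(alpha, omega, layer_hi, layer_lo):
    """R_{k+1,k}: reflection at the interface between layer k+1 (hi) and layer k (lo)."""
    mu_hi, sig_hi = layer_hi
    mu_lo, sig_lo = layer_lo
    b_hi = beta(alpha, omega, mu_hi, sig_hi)
    b_lo = beta(alpha, omega, mu_lo, sig_lo)
    return (mu_lo * b_hi - mu_hi * b_lo) / (mu_lo * b_hi + mu_hi * b_lo)

def generalized_reflection(alpha, omega, layers, thicknesses):
    """Layers listed bottom to top; thicknesses for the finite layers in between."""
    r_gen = 0.0
    for k in range(len(layers) - 1):
        r_local = local_reflection(alpha, omega, layers[k + 1], layers[k])
        if k == 0:
            r_gen = r_local          # bottom half-space: no further reflections below
        else:
            phase = np.exp(-2 * beta(alpha, omega, *layers[k]) * thicknesses[k - 1])
            r_gen = (r_local + r_gen * phase) / (1 + r_local * r_gen * phase)
    return r_gen

# Example (assumed): air | 20 mm plate (mu_r=500, 1.6 MS/m) | 40 mm insulation | 0.5 mm cladding (mu_r=300, 2 MS/m) | air
layers = [(1.0, 0.0), (500.0, 1.6e6), (1.0, 0.0), (300.0, 2.0e6), (1.0, 0.0)]
thicknesses = [20e-3, 40e-3, 0.5e-3]
print(generalized_reflection(alpha=50.0, omega=2 * np.pi * 10.0, layers=layers, thicknesses=thicknesses))
```

In the full model this coefficient would be multiplied by S(α) and the lift-off factor and integrated over α for every harmonic before the IDFT is applied.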
Through analyzing R'_{5,4}(α) and e^(−2αl_1), the interference factors can be discussed. Since R'_{5,4}(α) indicates that the interference caused by the cladding and the insulation is coupled with the ferromagnetic plate, R'_{5,4}(α) is discussed first. To further decouple the interference caused by the cladding, the electromagnetic wave propagation in layers 2 and 3 is studied with the same method, and the generalized reflection coefficient R'_{4,3} is rewritten as Equation (8), where the first term, R_{4,3}(α), can be expressed as R_{4,3}(α) = (μ_r3 β_4 − μ_r4 β_3)/(μ_r3 β_4 + μ_r4 β_3). As the insulation is always composed of non-conducting material, such as rock wool or foamed glass, R_{4,3}(α) is related only to the cladding material. If Equation (8) is completely substituted into Equation (7), the expression for R'_{5,4}(α) becomes very complicated, which is not conducive to decoupling. So, Equation (8) is first substituted only into the first R'_{4,3}(α) in Equation (7), yielding Equation (9), where (d_3 − d_4) is the thickness of the cladding and R_{5,4}(α) and R_{4,3}(α) are related only to the cladding material. The R'_{4,3}(α) that still remains in Equation (9) can be approximated by R_{4,3}(α), since the first reflection is much stronger than the other reflections, and R_{4,3}(α) therefore dominates the value of R'_{4,3}(α). Thus, the first line of Equation (9) is related only to the material and thickness of the cladding. Substituting Equation (9) into Equation (4), ΔU can be divided into two terms, where, according to Equation (9), ΔU_1 is related only to the cladding and represents the cladding-induced interference signal, while ΔU_2 contains the information regarding the ferromagnetic plate thickness, which is the desired signal. In addition, according to Equations (1)-(3), ΔU_2 is also influenced by the cladding, but this interference is negligible, as will be demonstrated in Section 4. Influence Analysis of Insulation and Lift-Off Based on the analysis above, ΔU_2 is the signal from which the cladding-induced interference has been decoupled. However, as shown in Equation (10), ΔU_2 is still affected by the insulation thickness, (d_2 − d_3), and the sensor lift-off, l_1. Therefore, methods for reducing these interferences should be further studied. Similarity Measurement Based Feature As indicated above, the original signal, ΔU, can be decomposed into two terms, ΔU_1 and ΔU_2, where ΔU_1 is the cladding-induced interference and ΔU_2 contains the information regarding the plate thickness. For ΔU_1, differential methods or similarity measurement methods could be used to eliminate it. However, the differential method is usually used only as a pre-processing step [8][9][10][11][12] and cannot yield the thickness measurement feature directly. Therefore, the similarity measurement method is selected here. In addition, as ΔU_2 is approximately inversely proportional to the insulation thickness plus the sensor lift-off, (l_1 + d_2 − d_3), this influence can be reduced by dividing two signals; a normalization method is therefore used to eliminate the influence of the interfering factors. A feature based on the similarity measurement of the normalized PECT signal is thus proposed in this study. The similarity of the signals is usually measured by distance information, which can be calculated through the Euclidean distance, angular separation, correlation coefficient, etc. In this paper, the Euclidean distance is used.
Then, the feature can be calculated as the Euclidean distance between the normalized signals, Dis = ||ΔU_nor − ΔU_norr||, where Dis is the Euclidean distance, ΔU_norr is the normalized reference signal, and ΔU_nor is the normalized calibration signal or detection signal. The reference signal is the signal of the defect-free plate, the calibration signal is the signal of a plate with a known thickness, and the detection signal is the signal of the plate under inspection. To examine the feature in detail, a 16Mn steel step wedge plate is used. The thicknesses of the 16Mn steel step wedge plate are 25.4 mm, 21.5 mm, 20.1 mm, 16.7 mm, and 14.8 mm. The thicknesses of the insulation and the cladding located above the plate are 40 mm and 0.5 mm, respectively. The cladding is composed of galvanized steel sheet (GS cladding). The sensor parameters are listed in Table 1. The lift-off of the sensor is 5 mm, and the amplitude, duty cycle, and period of the square-wave current are 4 A, 50%, and 1 s, respectively. Moreover, the signals of the 16Mn steel plate are calculated based on the analytical model described in Section 2.1. In the calculation, the relative magnetic permeability and conductivity of the 16Mn steel plate are 500 and 1.6 MS/m, respectively, and those of the GS cladding are 300 and 2.0 MS/m, respectively. As ΔU in Equation (1) is expressed as an integral of Bessel functions, which is cumbersome and complex, the truncated region eigenfunction expansion (TREE) method presented in [26] is applied to calculate the signals. Substituting the calculated signals into Equation (15), the values of Dis are obtained. The results are shown in Figure 3, which shows that the plate thickness is monotonically related to the Euclidean distance, so it can be used for thickness measurement. Experimental Study An experimental study is conducted to further discuss the feature. Figure 4 illustrates the experimental set-up. Similar to the set-up described in Section 3, a 16Mn steel step wedge plate with thicknesses of 25.4 mm, 21.5 mm, 20.1 mm, 16.7 mm, and 14.8 mm is used, of which the 25.4 mm step serves as the reference thickness, while the other thicknesses are used to simulate uniform wall thinning. A 40 mm thick plastic plate and a 0.5 mm thick galvanized steel sheet are attached to the 16Mn steel plate to simulate the insulation and cladding, respectively. A sensor with the parameters shown in Table 1 is placed over the cladding. The sensor lift-off is 5 mm. A square-wave voltage signal is generated by a function generator and subsequently converted to a current signal and amplified using a power amplifier. The amplified square-wave current signal is provided to the transmitter coil. The induced voltage of the receiver coil is amplified by a preamplifier and then digitized using a data acquisition card. A computer is used to display the detection signal. The amplitude, duty cycle, and period of the square-wave current are the same as those given in Section 3. All the normalized experimental signals can be obtained by dividing each signal by its own maximum value; they are substituted into Equation (16), where the reference signal ΔU_norr is obtained from the 25.4 mm thick plate, while each ΔU_nor is obtained from the plates with other thicknesses. The obtained Euclidean distances are presented in Figure 5a, and the errors between the experimental and theoretical results are shown in Figure 5b. As shown in Figure 5b, the maximum relative error is less than 5.0%, which indicates that the Euclidean distance obtained from the analytical model is quite accurate.
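A compact sketch of this feature computation is given below; the signal arrays are placeholders standing in for measured or calculated PECT responses.

```python
# Sketch of the similarity-measurement feature: normalise each signal by its own maximum
# and take the Euclidean distance to the normalised reference signal.
# The arrays below are placeholder data, not measured signals.
import numpy as np

def euclidean_feature(signal, reference):
    s = np.asarray(signal, dtype=float)
    r = np.asarray(reference, dtype=float)
    s_nor = s / np.max(s)          # normalised detection or calibration signal
    r_nor = r / np.max(r)          # normalised reference signal (defect-free plate)
    return np.linalg.norm(s_nor - r_nor)

t = np.linspace(0.0, 1.0, 1000)
reference_signal = np.exp(-t / 0.20)      # placeholder reference response
detection_signal = np.exp(-t / 0.17)      # placeholder response of a thinner plate

dis = euclidean_feature(detection_signal, reference_signal)
print(f"Euclidean distance feature Dis = {dis:.4f}")
```

In practice the feature values obtained this way for the calibration plates would then be related to the known thicknesses, as described in the following paragraphs.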
Therefore, the Euclidean distance obtained experimentally is replaced by those calculated from the analytical model in Section 5. Furthermore, to illustrate that the feature is viable for thickness measurement, two detection signals with the real thicknesses of 23.5 mm and 18.4 mm are provided. The Euclidean distances between the detection and reference signals are 0.0353 and 0.1820, respectively. The signals obtained from 21.5 mm, 20.1 mm, 16.7 mm, and 14.8 mm thick plates are used as calibration signals, and fitting the curve of the Euclidean distances and the calibration thicknesses with a second order polynomial, a calibration equation is obtained as follows: where x denotes the Euclidean distance, and (d1 − d2) is the ferromagnetic plate thickness. Substituting the Euclidean distances of the detection signals into Equation (17), the calculated thicknesses are 23.6 mm and 18.3 mm, respectively. The relative errors between calculated and real thicknesses are 0.42% and 0.54%, respectively. This indicates that the feature is feasible for the ferromagnetic plate thickness assessment. Discussion To demonstrate that the proposed feature is independent of confounding factors, such as the cladding, the insulation, and the sensor lift-off, changes in the feature with the confounding factors are discussed. The plates thicknesses examined in this section are 25.4 mm, 23.5 mm, 21.5 mm, 20.1 mm, 18.4 mm, 16.7 mm, and 14.8 mm. The other calculation parameters are the same as those in Section 3, and for the convenience of comparison, only the curves of the Euclidean distances with the plate thicknesses are displayed. Variation in Cladding Parameters Firstly, the Euclidean distances-thicknesses curves under different cladding materials and thickness are discussed. Figure 6a is the Euclidean distances-thicknesses curves with and without the GS cladding, and Figure 6b is the errors between them. As shown in Figure 6b, the maximum error is 5.78%. It could be concluded that the feature is helpful to reduce the cladding-induced interference. This result is consistent with the analysis in Section 2. Furthermore, except for the GS cladding, a stainless-steel sheet cladding (SS cladding) is also commonly used in petrochemical and power generation applications. Then, performances of features under SS cladding are studied. Figure 7 shows features obtained with and without a 0.5 mm thick SS cladding. They are calculated through the analytical mode by setting the relative magnetic permeability and conductivity of the SS cladding to 1 and 1.35 MS/m, respectively. As shown in Figure 7, for the SS cladding, the maximum differences in Euclidean distance could be calculated by (0.3877-0.3809)/0.3877 = 1.8%. This indicates that the similarity measurement based feature is also available for SS cladding. Thus, the feature is insensitive to cladding materials. In addition, except for the cladding materials, features with different cladding thicknesses are also analyzed. According to a previous study [24], the designed cladding thicknesses are always between 0.3 and 0.7 mm, so 0.3mm, 0.5mm, and 0.7mm thick claddings are examined in this study. Moreover, as the difference in Euclidean distances obtained with and without the GS cladding is larger, the GS cladding materials is used herein. Figure 8 shows the Euclidean distances with different GS cladding thicknesses. 
The maximum relative errors of the Euclidean distances obtained from the 0.3mm and 0.5mm thick cladding is calculated as (0.3763 − 0.3653)/0.3653 = 3.01%, and that for the 0.7mm and 0.5mm thick cladding is calculated as (0.3516 − 0.3653)/0.3653 = 3.75%. They are both small, which indicates that the Euclidean distance is independent of the cladding thickness. In conclusion, the similarity measurement based feature is insensitive to the cladding materials and the thicknesses. Variation in Sensor Lift-Off And Insulation Thicknesses As the interference caused by the sensor lift-off could be easily masked as a defect signal. To improve the thickness assessment accuracy, the feature change with the lift-off variation is investigated. Figure 9a shows Euclidean distances obtained with 5 mm, 10 mm, and 15 mm sensor lift-offs. The Euclidean distances under different lift-offs are basically overlapped. Therefore, the Euclidean distance is independent of the sensor liftoff. In addition, insulation thickness variations caused by installation irregularities or external forces could also affect the thickness assessment accuracy. Thus, the influence of insulation thicknesses on the Euclidean distances is examined. Figure 9b shows the Euclidean distance under different insulation thicknesses. The curves obtained with 40mm, 60 mm, and 80 mm thick insulation are approximately overlapped, which indicates that the Euclidean distance is also independent of insulation thickness. In conclusion, the similarity measurement based feature is independent of the sensor lift-off and the insulation thickness. Conclusions This study proposes an efficient and easy-to-use feature for the thickness assessment of ferromagnetic plates covered with claddings and insulations. The feature is obtained by decoupling the analytical solution, thus is only sensitive to the plate thickness, unaffected by other interference factors. Firstly, the insulated ferromagnetic plate is modeled as a four-layered structure, its solution provides a basis for the theoretical analysis. Secondly, by analyzing the electromagnetic wave propagation in the fourlayered structure through reflection and transmission theory, the cladding-induced interference is successfully decoupled from the PECT signal. In addition, by using the first integral mean value theorem, the inversely proportional relationship between the PECT signal and the insulation thickness as well as the sensor lift-off is deduced. Thirdly, the similarity measurement based feature is proposed for thickness assessment. A 16Mn steel step plate is used as an example to study the performance of the feature. Results show the proposed feature is sensitive only to the plate thickness, and it is independent of the interference factors, including the cladding material and thickness, the sensor lift-off, and the insulation thickness. The feature proposed in this paper is based on the similarity measurement, which effectively suppresses some interference factors. However, to eliminate the influence of environmental noises, new features shall be found based on fuzzy similarity measures to further improve the thickness assessment accuracy [27,28]. The method proposed in this paper will contribute for PECT application in ferromagnetic plate detection. Further study will include the studies for other materials and defects assessment.
5,132.4
2021-05-11T00:00:00.000
[ "Engineering", "Materials Science" ]
Physical Sensing of Surface Properties by Microswimmers – Directing Bacterial Motion via Wall Slip Bacteria such as Escherichia coli swim along circular trajectories adjacent to surfaces. Thereby, the orientation (clockwise, counterclockwise) and the curvature depend on the surface properties. We employ mesoscale hydrodynamic simulations of a mechano-elastic model of E. coli, with a spherocylindrical body propelled by a bundle of rotating helical flagella, to study quantitatively the curvature of the appearing circular trajectories. We demonstrate that the cell is sensitive to nanoscale changes in the surface slip length. The results are employed to propose a novel approach to directing bacterial motion on striped surfaces with different slip lengths, which implies a transformation of the circular motion into a snaking motion along the stripe boundaries. The feasibility of this approach is demonstrated by a simulation of active Brownian rods, which also reveals a dependence of directional motion on the stripe width. , Fp and h eff shown in Table S1. We find that the force for a no-slip surface F 0 x = F max s + Fp > 0 and the force for a perfect-slip surface F ∞ x = Fp < 0. The effective width h eff from the linear approximation of the fluid velocity profile in the gap is comparable, but not equal to the gap width h. The comparison here confirms that Eq. (1) describes the hydrodynamic interaction of rotating spherical and ellipsoidal bodies with surfaces over a wide range of slip lengths very well. The slip length b0, at which the force Fx vanishes, is also shown in Table S1. For a sphere of diameter d = 0.9 µm, one obtains b0 ≈ 65 nm at h = 0.02265 d = 20 nm. Mesoscale hydrodynamics simulations.-We briefly describe the hybrid simulation method here; for details, we refer to Refs. 3,4. The model E. coli has a spherocylindrical body and four left-handed helical filaments and is constructed by particles of mass M . We choose the body length b = 2 − 4 µm, body diameter d = 0.9 µm, flagellar helix radius 0.2 µm, pitch 2.2 µm and angle 30 • from experiments. 5,6 The elastic bending and twist moduli of the filaments are chosen according to the experimental range from about 10 −24 to 10 −21 N m 2 . [7][8][9] The details of the model will be published elsewhere. The solvent is modelled by a collection of point-like particles of mass m. Their dynamics comprises of alternating streaming and collision steps. In the streaming step, the solvent particles move ballistically and the position rs of particle s with velocity vs is updated according to rs(t + ∆t) = rs(t) + vs(t) ∆t with ∆t the time interval between collisions, while the dynamics of the body and flagellar particles is described by the Newton's equations of motion. In the collision step, all particles are sorted into cubic cells of length a and the velocity vi of particle i in cell c is renewed via the collision rule 10,11 v new where vc and rc are the velocity and position of the center of mass of all particles in the cell c, mj the mass of particle j in c, v ran i a random velocity sampled from the Maxwell-Boltzmann distribution, and I the moment-of-inertia tensor of all particles in c. The collision rule (i) conserves both linear and angular momentum locally, i.e. in each collision cell, (ii) includes thermal fluctuations of the solvent, and (iii) maintains a constant temperature. To satisfy Galilean invariance, a random grid shift of the cells is performed before each collision step. 
Length and time scales.-By mapping the cell-body width d = 9 a to 0.9 µm for swimming E. coli, 6 we obtain the length scale a = 100 nm, which sets the resolution of hydrodynamics in our simulation method. Comparison of the flagellar rotation rate ω_f = 0.0356 (k_B T/(m a^2))^{1/2} in our model to the experimental value of 2π × 120 Hz 6,13 leads to the time scale ∆t = 2.4 µs. Simulation setup.-Our simulations are performed in cubic boxes of length L = 120 a with periodic boundaries in the x- and y-directions and two planar surfaces implemented at z = 0 and z = L. Additional simulations with L = 150 a are run, and the obtained average curvature is consistent with that from L = 120 a within numerical errors, indicating that the periodic boundaries do not affect our simulation results. We choose the collision time step ∆t = 0.05 (m a^2/(k_B T))^{1/2} and the solvent density ρ = 10 m/a^3, leading to the solvent viscosity η = 7.15 (m k_B T)^{1/2}/a^2 and the Schmidt number Sc = 20, for which momentum transport dominates over mass transport. Newton's equations of motion for the particles of the E. coli model are integrated with a time step δt = ∆t/25 using the velocity-Verlet algorithm. Slip length of the surface.-We obtain partial-slip surfaces with different slip lengths by randomly mixing no-slip and perfect-slip boundary conditions. Figure S2 shows the slip length b as a function of the mixing ratio p, defined as the probability of applying the no-slip boundary condition at each collision step (1 − p for perfect slip). b is measured from the velocity gradient of the fluid under shear. Insets in (a)-(c) are close-ups of the first four points; see Fig. 3(b) for the notation. Table S1: Properties of a particle rotating parallel to a nearby surface, as obtained from the fits in Fig. S1. F_s^max and F_p are rescaled by 3πηd^2 ω/2. Movie S2: Simulation animation of E. coli swimming near a no-slip surface. The circular trajectory with clockwise motion is viewed from above the surface; compare also Fig. 2(a). The geometry of the bacterium is the same as in Fig. 1(a) and Movie S1. Movie S3: Simulation animation of an active Brownian rod swimming near a striped surface. The trailing path in color (red to blue) represents the swimming trajectory for the past 40 seconds. R− and R+ are the radii of curvature for the clockwise and counterclockwise trajectories on the alternating stripes with width L, respectively. The geometry, swimming velocity and diffusion coefficients of the active rod are in agreement with experimental values for E. coli.
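The stripe-induced snaking illustrated in Movie S3 can be caricatured with the short sketch below: an active Brownian particle whose circling sense flips between stripes with different effective slip lengths. This is only a schematic illustration of the mechanism; the speed, angular velocities, rotational noise and stripe width are assumptions and not the rod parameters used in the simulations.

```python
# Schematic sketch: active Brownian particle with stripe-dependent circling sense.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
v, dt, steps = 20e-6, 1e-3, 50_000       # speed 20 um/s, time step 1 ms
omega_a, omega_b = -2.0, 1.0             # rad/s: clockwise on one stripe type, counterclockwise on the other
stripe_width, d_rot = 30e-6, 0.05        # 30 um stripes, rotational diffusion coefficient (1/s)

x = y = theta = 0.0
traj = np.empty((steps, 2))
for i in range(steps):
    stripe_index = int(np.floor(x / stripe_width))          # stripes alternate along x
    omega = omega_a if stripe_index % 2 == 0 else omega_b
    theta += omega * dt + np.sqrt(2 * d_rot * dt) * rng.normal()
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    traj[i] = (x, y)

print("final position (m):", traj[-1])
```

With curvatures of opposite sign on neighbouring stripes, the trajectory tends to hug the stripe boundaries, which is the qualitative behaviour exploited in the main text for directing bacterial motion.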
1,474.2
2015-05-20T00:00:00.000
[ "Physics" ]
Behavior of metals induced by magnetic pulse loading The investigation of copper and aluminum ring samples was carried out using magnetic pulse loading. Two modifications of the magnetic pulse technique were used. They were based on a GKVI-300 high-voltage narrow-pulse generator (Morozov et al., 2011) [1]. These two approaches make it possible to decrease the period of the harmonic load to 100 ns. The study of fracture surfaces of aluminum and copper samples after the test was carried out on an optical microscope AxioObserver-Z1-M in dark field, and the structure of cross sections was studied in bright field or C-DIC. The structure was studied in cross sections after appropriate etching. Grain size and the number of pores on the surface of cross sections were determined after etching. Microhardness was measured on a PMT-3 device with a load of 20 g. The optical micrographs of aluminum demonstrate that the long pulse causes almost fully ductile fracture. In the case of the short pulse, the number of fibers decreases: the fracture surface exhibits the signs of both ductile cup fracture and brittle crystalline fracture with cracks, which are sometimes rather deep. In addition, the short pulse results in twinning, which seems surprising for aluminum featuring a high stacking fault energy. It is seen that under short loading dynamic recrystallization occurs. As for the copper samples, before loading they were single crystals, and after loading their structure consists of small grains due to dynamic recrystallization. The specimen with a notch has more developed dynamic recrystallization shear bands. Introduction Zhang and Ravi-Chandar [2][3][4][5] reported recent experimental data for high-speed tangential extension to rupture of thin metallic rings by a magnetic pulse technique. Electromagnetic loading provided radial expansion rates of the ring in the range 80-200 m/s, or strain rates on the order of 10^4 s^-1. Necking and fragmentation were recorded with a high-resolution streak camera. It was shown that the ring fails within a time much shorter than the first period of current oscillations in it. Fragmentation lasted less than 20 µs. The radius of the ring as a function of time was determined photographically; the velocity, by differentiating this function. The ring rapidly picked up a speed of 100-250 m/s and then slowed down when the current decreased and the energy dissipated as a result of plastic deformation. The strain rate ε̇ = ṙ/r was roughly estimated as falling into the interval (0.5-1.3)×10^4 s^-1. In uniformly deforming regions of the ring, the strain was found to never exceed the necking threshold. An increase in the cross-sectional area of the sample considerably increased the strain preceding the onset of strain localization. The strain before localization did not depend on the strain rate. In samples with a cross-sectional area ratio of more than 5, the strain localized in the form of shear bands. As the sample grew in size, the strain distribution became more uniform. It was shown that polymeric coatings slightly influence the onset of necking at high-rate deformation. Tubes exhibited uniform expansion to a critical strain with the formation of shear bands and failure at the sites of their intersection. Nomenclature is given in Table 1. In this work, the approach suggested in Morozov et al. (2011) [1] is elaborated.
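The order of magnitude of the quoted strain rates follows directly from the expansion velocities. The worked check below assumes a ring radius of about 15 mm, which is an assumption for illustration only (the rings of the cited experiments are of roughly this size).

```python
# Worked check of the strain-rate estimate eps_dot = r_dot / r for an expanding ring.
# The ring radius is an assumed value of the order used in such experiments.
r = 15e-3                       # assumed ring radius, m
for r_dot in (80.0, 200.0):     # radial expansion rates quoted above, m/s
    print(f"r_dot = {r_dot:5.0f} m/s  ->  strain rate ~ {r_dot / r:.1e} 1/s")
```

With these numbers the strain rate falls in the range of roughly 0.5×10^4 to 1.3×10^4 s^-1, consistent with the interval quoted above.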
Experimental technique Two modifications of the magnetic pulse technique were used. They were based on a GKVI-300 high-voltage narrow-pulse generator providing voltage amplitudes of 30-300 kV [1]. They could realize a sinusoidal load with a period of either 7.5 µs or 1 µs. The loading scheme is shown in Fig. 1. A current passing through a solenoid, on which a coaxial ring sample is placed, induces a current in the sample, and the interaction between these currents gives rise to a repulsion force between the ring and solenoid. The solenoid was made of a copper wire 1 mm in diameter. It had five turns and a diameter of 25 mm. The current through the solenoid was measured by a Rogowski coil and displayed on a digital oscilloscope, and the information from the oscilloscope was recorded in electronic format. When the sample coaxially placed on the solenoid broke, a photodiode recorded a flash indicating the instant of rupture. The study of fracture surfaces of aluminum and copper samples after the test was carried out on an optical microscope Axio-Observer-Z1-M in dark field, and the structure of cross sections was studied in bright field or C-DIC. The structure was studied in cross sections after appropriate etching. Grain size and the number of pores on the surface of cross sections were determined after etching. Microhardness was measured on a PMT-3 device with a load of 20 g. The ductile fracture surface was matt gray with characteristic "fibers." The brittle fracture surface is seemingly crystalline without noticeable signs of plastic deformation. The amount B (%) of the ductile component on the fracture surface was determined by formula (1) given in the State Standard, where X is the percentage of the brittle component on the fracture surface. The surface area of the brittle component was measured on the photograph. Test rings were made of aluminum alloy and had an inner diameter of 28 mm. Rings to be tested with long-period voltages were 0.11-0.40 mm thick and 0.5-1.0 mm wide, and those subjected to short-period voltages were 0.11 mm in thickness and 1.0 mm in width. Rings made of foil 0.03 mm thick and 1.5-2.0 mm wide were also investigated. Results and discussion Figure 2 shows the samples in the initial state and those having been subjected to loading under different conditions. Both for the 7.4-µs-long pulse (Fig. 2b) and for the 1.0-µs-long pulse (Fig. 2c), rupture is seen to occur at the same site; the only difference is that many necks are observed in the latter case. When the energy of the 7.4-µs-long pulse exceeds the threshold value, fragmentation becomes more intense. When the foil sample is loaded by the 1.0-µs-long pulse (Fig. 2e), fragmentation is accompanied by necking. The measured data for the ductile component of fracture are given in Table 2. The notation in Table 2 is the following: T is the pulse width, S is the shear area at the fracture surface. As the pulse gets shorter, the fraction of fibers in fracture decreases monotonically; that is, the samples embrittle. For comparison, Table 2 gives experimental values of T and S for aluminum loaded by an air-gas impact machine with an intermediate pulse width of 1.3 µs. The results of the fracture surface examination are given in Table 2. In the case of the long pulse, fracture is seen to be mostly ductile; that is, the samples are more ductile than for the short pulse. With the shortening of the current period, the shear area on the fracture surface decreases monotonically, i.e. the samples become more brittle.
The optical micrographs demonstrate that the long pulse causes almost fully ductile fracture. In the case of the short pulse, the shear area decreases: the fracture surface exhibits the signs of both ductile cup fracture and brittle crystalline fracture with cracks, which are sometimes rather deep. In addition, the short pulse results in twinning (Fig. 3), which seems surprising for aluminum alloys featuring a high stacking fault energy. Twinning is a typical deformation mechanism at fast loading of alloys with a low or medium stacking fault energy. As for materials with a high stacking fault energy, they twin only under a high load (above the twinning threshold) or at a very high rate of loading. Characteristics of the samples and results are summarized in Table 3, where T is the period of oscillation of the current in the coil, d the ring diameter, S = h × b the cross-section of the sample, D the grain size, n the number of pores in an area of 400 µm², and HV the microhardness. Figure 4 shows the structure of aluminum samples in the initial state and after loading with different periods and different cross-sections (scale factor). Dynamic recrystallization, i.e. the formation of new small grains, is seen for short-time loading. The highest degree of dynamic recrystallization is found in the aluminum samples with the longer loading duration and the maximum cross-section (scale factor). Aluminum samples after loading showed a greater tendency to porosity with increasing loading period compared with the baseline. Porosity in copper after magnetic pulse loading was lower than in the initial state. In addition, with increasing loading duration multiple cracks were generated in the samples. Figure 5 shows the structure of copper in the initial state and after the magnetic pulse loading. The copper sample before loading was a single crystal, and after loading, as a result of dynamic recrystallization, small grains appear with a size of 0.7-2.5 µm. Dynamic recrystallization is more advanced in the sample loaded with the longer current period, where the grains are smaller. Furthermore, in the sample with a notch there is a large amount of shear bands. Conclusions 1. A magnetic pulse method with simultaneous photographic registration of the moment of fracture of metallic ring samples under short (about 1 µs) tensile pulse loading was developed and tested. 2. It was found that shortening the exposure time leads to the fracture of the samples; the samples become more brittle. Dynamic recrystallization processes and the formation of cracks were observed in the aluminum samples, whereas twinning and shear band formation occur in copper. Figure 2. View of the sample: (a) before loading; (b) fracture at a threshold energy, T = 7.4 µs; (c) fracture at a threshold energy, T = 1 µs; (d) fracture at an energy far exceeding the threshold, T = 7.4 µs; and (e) fracture of the sample made of aluminum foil 0.03 mm thick, T = 1 µs [1]. Figure 3. Fracture surface of the sample subjected to 1-µs-long loading pulses: S - shear area and T - twins. Table 2. Percentage of fibers on the fracture surface of aluminum. Table 3. Characteristics of the samples and results.
2,272.2
2015-09-01T00:00:00.000
[ "Materials Science" ]
Fragmentation Behavior Studies of Chalcones Employing Direct Analysis in Real Time (DART) Chalcones are naturally occurring, biologically active molecules generating interest from a wide range of research applications including synthetic methodology development, biological activity investigation and the study of fragmentation patterns. In this article, a series of chalcones has been synthesized and their fragmentation behavior was studied using the modern ambient ionization technique Direct Analysis in Real Time (DART). A DART ion source connected to an ion trap mass spectrometer was used for the fragmentation of various substituted chalcones. The chalcones were introduced to the DART source using a glass capillary without a sample preparation step. All the chalcones showed prominent molecular ion peaks [M]•+ corresponding to the structures. Multistage mass spectral data MSn (MS2 and MS3) were collected for all the chalcones studied. The chalcones with substitutions at the 3-, 4- or 5-positions gave product ion peaks with the loss of a phenyl radical (Ph•) by radical-initiated α-cleavage, while substitution at the 2-position of the A-ring of the chalcone gave a product ion peak with the loss of a substituted styryl radical (PhCH=CH•). Chalcones with substituents at the 4-positions of both the A and B rings gave both types of fragmentation patterns. In conclusion, chalcones can be characterized easily and efficiently using the modern DART interface in a very short time, without any cumbersome sample pretreatment. Introduction Chalcones are a large group of phytochemicals that have the general structure of a 15-carbon skeleton, which consists of two phenyl rings attached to the 1 and 3 positions of the 2-propen-1-one moiety, with the IUPAC name 1,3-diphenyl-2-propen-1-one (1a). Chalcones are the main precursors for the biosynthesis of flavonoids1 in plants as well as in synthetic organic chemistry. Although flavones were synthesized by the dehydration of chalcones almost a century ago, chalcones are still considered a subclass of flavonoids. Chalcones are generally synthesized in the presence of methanolic or ethanolic sodium or potassium hydroxide. Various reagents and conditions have been employed for the procedure in the past decade. Recent studies on the biological evaluation of chalcones revealed that some of them possess in vitro and/or in vivo activity as antimalarial, antituberculosis, antitumor, antileishmanial, anti-inflammatory, antimitotic, nitric oxide inhibitory, antihyperglycemic, NADH:ubiquinone oxidoreductase inhibitory, hyperglycemic, glucosidase inhibitory, antidiabetic and antiulcerogenic agents and as phase II metabolic enzyme inducers, etc. Because of the intriguing structure and importance of chalcones, a number of research groups have been involved in structural characterizations and/or in studying their fragmentation patterns in the last several decades using mass spectrometry. Recently, Zhang et al. reported the characterization and isomer differentiation of chalcones using electrospray ionization (ESI) tandem mass spectrometry.2 Most studies have been carried out using ESI,3 with other reports on chemical ionization,4,5 field desorption6 and fast atom bombardment.7,8
Recently, DART has been used for the chemical profiling of different landraces of Piper betle leaves,9 for quality and authenticity assessment of olive oil,10 in the detection of breast cancer,11 for screening for pesticides,4 for serum metabolomic fingerprinting,12 for multiple mycotoxins in cereals,13 in the analysis of purified pharmaceutical preparations using thin-layer chromatography,14 and most recently in the analysis of phthalates added to food and nutraceutical products.15 The technique has also been recently reviewed.16,17 To the best of our knowledge, the ambient ionization technique DART has not been used as the ion source for the mass spectral analysis of organic small molecules. In this study, we have synthesized a series of chalcone derivatives and studied their fragmentation patterns using a DART ion source in an ion trap mass spectrometer. DART is fast and simple, yet can detect trace chemicals in complex matrices using minimal or no sample preparation. Chemicals and Reagents Benzaldehyde and acetophenone derivatives were purchased from Alfa Aesar and Sigma-Aldrich and were used without further purification. Reagents and solvents were analytical grade and used without distillation. Mass Spectrometer All experiments were performed using an ion trap mass spectrometer from Agilent Technologies with an IonSense (Saugus, MA) Direct Analysis in Real Time (DART) ionization source. For the DART source, helium gas was used at a flow rate of 4 L/min. The gas heater and capillary voltage of the DART source were set to 350 °C and 4000 V, respectively. The distance between the outlet of the DART source and the inlet orifice of the mass spectrometer was set to 1 cm. Sample introduction was accomplished by slowly moving the closed end of a glass capillary, which was dipped into the powdered analyte, across the helium gas stream between the DART source and the orifice of the mass spectrometer so that the sample was carried into the spectrometer. The spectra recording interval was 0.5 s. Fragmentation was carried out with the following parameters: trap drive 78, ICC on, accumulation time 200,000 s, and fragmentation amplitude 1 V. Results and Discussion Chalcones were synthesized by adding alcoholic sodium hydroxide (10%) to a mixture of a (substituted) benzaldehyde and a (substituted) acetophenone in ethanol and stirring for 2-3 hours, which gave quantitative yields of the corresponding chalcones. The solid chalcones were introduced into the DART source without any other sample preparation. The chalcones tested gave molecular ion peaks [M]•+, which are summarized in Table 1. For example, chalcone 1a (without substitutions in the A or B rings), having molecular weight 208.15 (nominal mass 208), gave a molecular ion peak at m/z = 208 as [M]•+ (Table 1), while 1d, having molecular weight 272.73 (nominal mass 272), gave a peak at m/z = 273 in the same way. Multistage mass spectral data (MS2 and MS3) were acquired for all the chalcones synthesized and are also summarized in Table 1. MS2 of chalcone 1a gave m/z 129.7 (100%) as a metastable ion peak with the loss of a phenyl radical (Scheme 1) by an α-cleavage mechanism. The proposed mechanism and fragmentation pattern show an interesting contrast to the fragmentation pattern reported for chalcones using ESI tandem mass spectrometry.2
Product ion peaks together with their corresponding proposed structures obtained from MS2 for all the chalcones 1a-f are shown in Figure 1A-1F. In all cases, chalcones 1a-f gave product ion peaks with the loss of a phenyl radical. With substitution at position 2 of the A-ring of chalcones 1g-i, the MS2 fragmentation gave m/z 135 (100%) for all the entries as a product ion peak with the loss of the substituted styryl radical (Scheme 2), followed by MS3 at m/z 95 (100%). As shown in Scheme 2, all three compounds (1g-i) showed the product ion peak at m/z 135 (100%). In addition, the dichloro-substituted 1i gave an additional peak at m/z 199 (9%) with the loss of a 2-methoxyphenyl radical. The metastable ion peaks for the chalcones 1g-i and their corresponding structures are shown in Figure 1G-1I. The effect of the substituent was observed in DART while examining the chloro-substituted chalcone 1j, substituted at the 4-position of the A-ring (Scheme 3): two product ion peaks were obtained, one at m/z 139 (40%) with the loss of the chlorostyryl radical and the other at m/z 165 (100%) with the loss of the chlorophenyl radical, in a ratio of 2:5. On the other hand, the methoxy group at the 4-position of the B-ring (1k) also gave two product ion peaks, at m/z 139 (100%) with the loss of the methoxystyryl radical and at m/z 161 (42%) with the loss of the chlorophenyl radical (Figure 1J and 1K), in a ratio of 5:2. In addition, MS2 of 1j gave m/z 258 (6%) and 241 (15%) with the loss of water (H2O) and chlorine (Cl), respectively. MS2 of 1k also gave a similar fragmentation pattern, m/z 255 (18%) and 237 (5%), with the loss of water (H2O) and chlorine (Cl), respectively (Figure 1J and 1K). MS3 of the product ion peak of 1j (m/z 165) gave m/z 137 (100%) with the loss of carbon monoxide (CO), while MS3 of the product ion peak of 1k did not give any fragment. Conclusions The fragmentation behavior of chalcones was studied using the very convenient ambient ionization technique DART in combination with a mass spectrometer, and all the chalcones were found to give similar fragmentation patterns with the molecular ion peak as [M]•+. This method is easy, convenient and time saving, and most importantly the samples can be analyzed in their natural state without sample preparation. A substituent in the A ring at the 2′ position gives a product ion peak retaining the A ring; a substituent in the B ring at any position gives a product ion peak retaining the B ring; substituents in both the A and B rings at the 4-positions give two product ion peaks, retaining the A and B rings, respectively. Taken together, we can conclude that chalcones can be characterized easily and efficiently using the modern DART ion source in a very short time. Scheme 1. MSn pattern for chalcones 1a-f (substituted at the R2 or R3 position of the A-ring) using DART: proposed mechanism for the fragmentation pattern via the α-cleavage pathway.
2,034.2
2013-06-01T00:00:00.000
[ "Chemistry" ]
Sustainable regional social protection system The purpose of this article is to develop directions for transforming the regional social protection system. The regions with a large imbalance in the population structure are very vulnerable. The study focuses on the analysis of the Yamal-Nenets Autonomous Okrug (YNAO), as it differs in a specific labor market with a large number of shift workers. The result of the study was the identification of priority areas for the regional social protection system development. The analyzed foreign experience also showed the need to involve retirees in the labor activity. Introduction Social policy is the main and most significant area in the implementation of state functions. Economy development, regulation of market relations, stimulation of economic growthall these actions of the state only create conditions for improving the welfare of the population and developing social infrastructure [1]. Insufficiently high rates of economic growth in the Russian Federation after the crisis of 2014-2015 have exacerbated the most acute social problems. Currently, the Russian Federation has quite low unemployment rates as well as inflation is relatively low compared to previous periods. At the same time, there is a large number of socially unprotected citizens. The complexity of the situation lies in the fact that in Russia the qualitative composition of socially vulnerable groups is completely different than in most other countries. So, while abroad the majority of socially unprotected segments of the population are represented by marginal elements, in Russia such groups are mostly represented by retirees, stagnant unemployed, low-skilled workers, and dysfunctional families. In Russia, there are no such concepts as 'ghetto' or 'marginal areas' [2]. Thus, the problem of supporting the socially vulnerable groups of the population is a nationwide task that concerns the entire society. The role of regional social protection authorities in this situation is significant. Although the country's pension system is centralized, many social protection functions have been transferred to the regional level. Moreover, when there is a need for concrete actions by the social protection authorities, in direct contact with the population, the importance of the regional level of government grows. The theme of population social protection is quite widely represented in the Russian scientific literature. However, most of the work focuses on the study of the issues of social policy as such. The regional aspect often remains outside the focus of attention of researchers. Thus, the relevance of the article is determined by the insufficient knowledge of the issues of population social protection at the regional level, considering local specifics. The social policy in the Russian Federation is structured in such a way that the powers of the local government, the regional government and the federal center are strictly delimited. With a certain degree of conditionality, it can be said that the rights enshrined in the country's Constitution assigned to the federal center. The implementation of other social guarantees is the prerogative of local authorities. In view of the fact that the federal level governs the most powerful instruments of social policy, it plays the most important role in distribution of financial resources. This has a number of implications for the regional level [3]. 
Thus, the pension system in the country and the functioning of the social infrastructure system depend directly on the macroeconomic situation and the expenditures of the federal budget. That is why regional differences in the level of social protection are not so great, even despite the differences in the economic development of regions [4]. Social policy in the regions is always aimed at taking into account the factors shaping social processes that are specific to each region. Accordingly, this policy provides for the introduction and implementation of regional social guarantees that extend or supplement federal guarantees [5]. Moreover, the regulation of social processes in individual territories and organizations leads to the sustainable development of the region as a whole. In order to create a unified system for regulating regional social processes, it is necessary to coordinate the actions of all local authorities, as well as the efforts of organizations in the social protection of their employees. The result of such efforts will be social stability, which is the main task of social policy. The regional level of the social protection system is identified with the responsibility of local authorities. However, at its core, the social protection system in different regions will be basically similar. It is important to note that any regional system of social protection in the Russian Federation is a component of the unified state system of social protection of the population. The nationwide model is built in such a way that its most important components are integrated at the state level. For example, the Pension Fund of the Russian Federation is one such unified structure. In addition, federal legislation is the foundation for the social sphere; the decisions of regional assemblies can only supplement it [6]. Methods To identify the features of the social protection system, it is necessary to consider the structure of its formation. It is more convenient to do this using the example of a real region. However, large differences in the structure of regional economies, the size of budget allocations for social protection and the level of incomes in the regions prevent generalizing the results to the whole country. Especially vulnerable are those regions with a large imbalance in the structure of the population; the social policy of such regions requires very specific measures. For the purposes of this study, the Yamalo-Nenets Autonomous Okrug (YNAO) was chosen as the object. The YNAO's labor market is completely different from that of the country as a whole: since the main employers in the region are extractive industries, many employees work on a rotational basis (shift work) or move to the region temporarily. Efforts to maintain a balanced social protection system depend, among other factors, on the number of people of working age who make tax contributions. This analysis is based on statistical data and gives an accurate picture of the current state of the system. It is necessary to evaluate the effectiveness of existing measures, taking risks into account, in order to develop directions for transforming the regional social protection system. Results The YNAO is a federal subject of the Russian Federation and is part of the Tyumen Region. On 01-Jan-2019, the total population was 0.54 million people [7]. 76.1% of the population are of working age (the average for Russia is 61.9%). 
The demographic situation in the district is characterized by steady natural population growth associated with the specific structure of the regional economy and a large number of temporary residents. The basis of the YNAO economy is oil and gas production. Oil and gas condensate are produced by more than 30 enterprises, which attract workers on a rotational basis (shift work). On 01-Jan-2019, the average salary in the region amounted to 86,560 rubles (3rd place in the Russian Federation) [8]. The unemployment rate in the region is significantly lower than the corresponding indicator for the country as a whole and is only 0.6% (the national average is 4.9%). A distinctive feature of the YNAO is the age structure of the unemployed: more than half of the unemployed in the region belong to the age group under 30 years, which differs significantly from the national ratio. Unlike most other regions of the country, there are practically no unemployed people in the age group of 60 years and older, and unemployment is also low in the 50-60 age group, which reflects the specific organization of the labor market in the region (Table 1). At the same time, the situation with the number of socially unprotected citizens is also specific. Upon reaching retirement age, people working on a rotational basis often leave the region and return to their home areas. This makes the share of retirees in the YNAO significantly lower than in the country as a whole (Table 2). In addition, in the YNAO, the share of retirees in the total amount of social benefits is lower because of the different types of benefits and social assistance available. The reasons are an active social responsibility policy among local enterprises and municipal authorities. The fact that, in the conditions of the North, many segments of the population need additional protection from local authorities also plays a role (Table 3). An important factor that has a significant impact on the content of social protection policies in the YNAO is the remoteness of the region from large cities. Because of this, as well as the high cost of living, poor citizens in the region find themselves in a much more difficult situation than people with the same income in other regions of the country. The cost of living for them is much higher, and the remoteness of the region from the densely populated parts of the country prevents them from changing their place of residence quickly. In addition, they are far from relatives who could assist them. Some of the people who have come to the YNAO to earn money do not have any social contacts and may find themselves in a very difficult situation. The same can be said about retirees with low pensions who did not leave the region after reaching retirement age. In other regions of Russia, as a rule, a person ages in the social environment where he has lived all his life. He lives with his family and may have some kind of property that has traditionally belonged to his family (a country house or cottage). All this can be a great help upon retirement. Under the conditions of the YNAO, a person most often does not have this. Thus, if a pensioner does not receive a sufficiently large pension, he finds himself in an extremely difficult situation: the pension is not enough to live on, and there are no additional sources of income or other support. 
Discussion In the YNAO, social protection policies are focused on the following areas: 1) social protection of children, childhood and adolescence; 2) social protection of working people; 3) social protection of disabled people; 4) and social protection of the family. The successful indicators of the socio-economic development of the YNAO, its leading position among the regions, and the smaller scale of its social problems do not mean that the system of social protection of the population does not require improvement. The district, like all other regions of the country, faces a number of strategic challenges. At present, their influence on the overall situation in the region is not so great, but in the future they can turn into systemic problems. These challenges include: • an increase in life expectancy and in the share of retirees and, consequently, an increase in the burden on the Pension Fund; • changes in the labor market and in the technological foundations of the organization of labor: as a result of the constant automation and computerization of production processes, we should expect a further release of the workforce and an exacerbation of the unemployment problem; • and the permanent existence of economic risks: a worsening of the situation in the economy as a whole would reduce the state's ability to implement social protection policy [9]. In the YNAO, as already noted, despite the relatively small proportion of citizens who need social support, almost 80% of regional budget expenditures are social items. It is obvious that the social protection system first of all needs to increase the effectiveness of the funds spent, as well as a certain reorganization, taking into account all the challenges it faces. The key directions for transforming the social protection system in the YNAO can be represented as follows. Development of regulatory support. Rapid changes in social protection and the transience of the situation require constant refinement of the regulatory documents. In particular, there is a need to delimit the powers and responsibilities of federal and local authorities. Improving the targeting of social services. The mass provision of social assistance leads to part of it being spent inefficiently. The targeting principle will help to spend resources on assisting those who really need it. Engagement of public organizations. The main functions of social protection are assigned to the federal and regional social protection structures. At the same time, public movements and volunteer organizations in the Russian Federation are already sufficiently developed to perform a number of functions, at least organizational and informational ones. 'Poverty prevention'. The main strategic goal of social protection should be not so much counteracting negative social phenomena as creating conditions under which such phenomena do not expand. That is, the emphasis should be placed not only, and not so much, on social assistance as on the prevention of negative phenomena. Foreign research [10][11][12] shows that, with a general increase in life expectancy, it is necessary to direct efforts toward involving retirees in labor activity. This direction of social policy includes an important educational element that raises a whole layer of problems: • lifelong learning, • advanced training and retraining, • and obtaining a second or third profession in adulthood. 
All this can help retirees remain in demand on the labor market longer and, accordingly, increase budget revenues through deductions from the wage fund and personal income. The inability of the region to maintain control over the social situation in its territory can generate serious long-term threats. Management efficiency is especially weakened in remote, sparsely populated and rapidly declining settlements. It is necessary to create a strong legal framework that provides the conditions for the sustainable development of the region through a more complete and balanced use of domestic resources [12]. Conclusion At the regional level, the main burden consists in supporting the necessary infrastructure and organizing direct work with socially vulnerable citizens. This is the main feature of the regional social protection system. In regions such as the YNAO, regional authorities carry an additional burden due to the specifics of economic and social life. Despite the well-being of the majority of the population, the remoteness of the region, the high cost of living and the high cost of maintaining infrastructure facilities make the task of supporting socially vulnerable people much more difficult and costly than in other regions. All this creates additional risks for the regional level of governance and makes it necessary to apply additional measures to ensure the sustainable development of the social sphere in the region, including the search for additional sources of funding and additional participants in the social protection system.
3,417.4
2020-01-01T00:00:00.000
[ "Economics" ]
A 2-Component Laplace Mixture Model: Properties and Parametric Estimations Mixture distributions have received considerable attention in life applications. This paper presents a finite Laplace mixture model with two components. We discuss the model properties and derive the parameter estimates using the method of moments and maximum likelihood estimation. We study the relationship between the parameters and the shape of the proposed distribution. A simulation study assesses the effectiveness of the parameter estimates of the Laplace mixture distribution. Introduction The Laplace distribution has wide applications in various fields such as engineering, business, medicine, and others. It is also known as the double exponential distribution because it can be viewed as two exponential distributions joined at an additional location parameter. It is considered a member of the family of lifetime distributions. Mixture distributions, and the problem of mixture decomposition concerned with identifying the constituent components and their parameters, date back to 1846, but most references point to the work of Karl Pearson in 1894 [7]. The approach taken by Pearson was to fit a univariate mixture of two normals to data through the choice of the five parameters of the mixture so that the empirical moments matched the model. Pearson's work was successful in identifying two distinct sub-populations and also in showing the flexibility of mixtures as a moment-matching tool. Later works focused on addressing the remaining problems, and the invention of the modern computer together with the popularization of maximum likelihood estimation techniques stimulated research on mixture models [6]. In 2002, Figueiredo and Jain applied finite mixtures to unsupervised learning and gave important insights into mixture models [3]. Bhowmick et al. introduced the Laplace mixture model instead of the Gaussian mixture model because of the tail length and weight of the Laplace distribution, and then applied the mixture to microarray experiments [2]. Ali and Nadarajah derived the information matrices for the Gaussian mixture and the Laplace mixture [5]. A mixture of asymmetric Laplace and Gaussian distributions was estimated using the EM algorithm by Shenoy and Gorinevsky [11]. Amini-Seresht and Zhang provided stochastic comparisons of two finite-mixture models with different mixing ratios and independent variables [1]. Most of the references present comprehensive studies and applications of finite mixture models based on the work of McLachlan and Peel [6]. Ramana et al. introduced the two-component mixture of Laplace and Laplace-type bimodal distributions, and they found the properties and estimates of the parameters common to the two components [9]. The previous studies are based on equal mixing parameters or constant scale parameters. The aim of this paper is to study the two-component Laplace mixture model and estimate its parameters in the case where all model parameters are unknown, using parametric estimation methods. The paper is organized as follows: Section 2 presents the definition of the Laplace mixture distribution. Section 3 discusses the distribution function. The properties of the proposed distribution are studied in Section 4. The estimation methods are presented in Section 5. The simulation studies are presented in Section 6. Finally, conclusions are drawn in Section 7. 
Mixture of Two Laplace Distributions Let $X$ be a random variable following the Laplace distribution; the general form of the Laplace density is $f(x;\mu,\lambda)=\frac{1}{2\lambda}\exp\!\left(-\frac{|x-\mu|}{\lambda}\right)$, where $-\infty < x < \infty$, $-\infty < \mu < \infty$, and $\lambda > 0$. The two-component Laplace mixture density is then $f(x)=\alpha_1 f(x;\mu_1,\lambda_1)+\alpha_2 f(x;\mu_2,\lambda_2)$, with mixing weights satisfying $\alpha_1+\alpha_2=1$. The Cumulative Density Function (CDF) We verify that the given pdf is a density function by computing the integral of the mixture distribution over its range, which gives $\int_{-\infty}^{\infty} f(x)\,dx=\alpha_1+\alpha_2=1$. The cumulative distribution function (CDF) of $X$ is defined as $F(x)=\alpha_1 F_1(x)+\alpha_2 F_2(x)$, where each component CDF is $F_i(x)=\frac{1}{2}\exp\!\left(\frac{x-\mu_i}{\lambda_i}\right)$ for $x<\mu_i$ and $F_i(x)=1-\frac{1}{2}\exp\!\left(-\frac{x-\mu_i}{\lambda_i}\right)$ for $x\ge\mu_i$. The Properties of the Laplace Mixture Distribution In what follows, the distribution properties are studied by obtaining the mean, the mode, the median, and the variance of the 2-CLPM distribution. The mean of the random variable is $E(X)=\alpha_1\mu_1+\alpha_2\mu_2$. The mode of the random variable is obtained by differentiating the density in equation (4) separately on the regions $x\ge\mu_i$ and $x<\mu_i$; in both cases the same expression for the mode is obtained. The median $m$ of this distribution can be obtained by solving $F(m)=1/2$, using the symmetry of each component about its location parameter. The variance and standard deviation of the mixture distribution. The variance is defined as $\mathrm{Var}(X)=E(X^2)-\big(E(X)\big)^2$. We then find $E(X^2)=\alpha_1(\mu_1^2+2\lambda_1^2)+\alpha_2(\mu_2^2+2\lambda_2^2)$. Substituting Eq. (7) and Eq. (13) into Eq. (12), we get $\mathrm{Var}(X)=\alpha_1(\mu_1^2+2\lambda_1^2)+\alpha_2(\mu_2^2+2\lambda_2^2)-(\alpha_1\mu_1+\alpha_2\mu_2)^2$. The standard deviation of the random variable is the square root of the variance. Tables (1-5) present the properties of three samples of size 1000 taken from the Laplace mixture distribution; we compute the mean, mode, median, variance, skewness and kurtosis for different parameter values of the Laplace mixture distribution. For simplicity, we take special cases of the mixture distribution in order to study its performance. In Table 1, we examine the effect of the scale parameters by assuming that the location parameters are $\mu_1=0$ and $\mu_2=2$, with $\alpha_1=0.3$ and $\lambda_2=2$. As expected, the simulation results illustrate that the Laplace mixture distribution has positive skewness and a heavy tail, which provides a well-fitted model for life applications in which outliers are located in the right tail of the mixture curve. In addition, the Laplace mixture distribution has a positive kurtosis equal to 3.317, which indicates that the curve is approximately that of a normal curve. For Table 2, we compute the distribution properties with different values of the second scale parameter $\lambda_2$, setting the other parameters to $\mu_1=3$, $\mu_2=5$, $\alpha_1=0.4$ and $\lambda_1=0.5$. The third case studies the properties of the proposed mixture model with different values of the mixing parameter $\alpha_1=0.2, 0.5$ and $0.9$, assuming $\mu_1=0$, $\mu_2=5$, $\lambda_1=1$, and $\lambda_2=2$. The results are presented in Table 3, which shows equal values of the mean and median. At $\alpha_1=0.2$, we observe that the kurtosis equals 3.25, which indicates that the curve is approximately normal. In contrast, when $\alpha_1=0.5$ and $0.9$ the curve has a heavy tail. Table 4 displays the results of studying three samples from the Laplace mixture distribution with parameter values $\alpha_1=0.6$, $\mu_2=3$, $\lambda_1=4$, and $\lambda_2=6$. In this case we use different values of the first location parameter, $\mu_1=(1,2,4)$. We observe that the three distributions have skewness of 0.5 or 0.6 and kurtosis of approximately 2.1, which means that the curves are close to the normal distribution curve. The results of the final case are presented in Table 5, which leads to the same conclusions, with $\mu_1=5$, $\lambda_1=1$, $\lambda_2=5$ and $\alpha_1=0.2$, while the second location parameter takes the values $\mu_2=(1,2,3)$. Parametric Estimation Methods In this section, we obtain the parameter estimates of the Laplace mixture distribution using two parametric estimation methods: the method of moments (MME) and the maximum likelihood estimation (MLE) method. The details are presented in the following subsections. 
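To make the moment formulas above concrete, the following Python sketch (not from the original paper; the parameter values are chosen arbitrarily for illustration) draws a large sample from the two-component Laplace mixture and compares the empirical mean and variance with the closed-form expressions $E(X)=\alpha_1\mu_1+\alpha_2\mu_2$ and $\mathrm{Var}(X)=\alpha_1(\mu_1^2+2\lambda_1^2)+\alpha_2(\mu_2^2+2\lambda_2^2)-E(X)^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameter values for the 2-component Laplace mixture.
alpha1, mu1, lam1 = 0.3, 0.0, 1.0
alpha2, mu2, lam2 = 0.7, 2.0, 2.0

n = 100_000
# Sample the component label, then draw from the corresponding Laplace component.
labels = rng.random(n) < alpha1
x = np.where(labels,
             rng.laplace(mu1, lam1, size=n),
             rng.laplace(mu2, lam2, size=n))

mean_theory = alpha1 * mu1 + alpha2 * mu2
ex2_theory = alpha1 * (mu1**2 + 2 * lam1**2) + alpha2 * (mu2**2 + 2 * lam2**2)
var_theory = ex2_theory - mean_theory**2

print("mean:     empirical %.4f  theoretical %.4f" % (x.mean(), mean_theory))
print("variance: empirical %.4f  theoretical %.4f" % (x.var(), var_theory))
```

For a sample of this size the empirical and theoretical values agree to two or three decimal places, which is the kind of agreement the tables in the paper report.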
The Maximum Likelihood Estimation (MLE) The maximum likelihood estimation (MLE) method is used to estimate the parameters of the distribution. Let $x=(x_1,x_2,\ldots,x_n)$ be a random sample; then the likelihood function of a given distribution is defined as $L(\theta\mid x)=\prod_{i=1}^{n} f(x_i;\theta)$. The maximum likelihood estimate of $\theta$ is calculated by finding the value of $\theta$ that maximizes the log-likelihood function, i.e. $\hat{\theta}=\arg\max_{\theta}\,\log L(\theta\mid x)$. For the Laplace mixture distribution, define $\ell(\theta)=\log L(\alpha_1,\lambda_1,\lambda_2\mid x)$, where the location parameters are known. Next, to maximize the log-likelihood we need to find the derivatives of the log-likelihood function with respect to the distribution parameters (see [10]). The next step is to set equation (34), equation (35), and equation (36) to zero and solve the system for the three parameters $\alpha_1$, $\lambda_1$ and $\lambda_2$; this step gives the MLE of $\alpha_1$, $\lambda_1$ and $\lambda_2$. In practice, the parameter estimates via the MLE method are obtained numerically using the Newton-Raphson method by providing initial values for the parameters. This is done through an implementation of the method in the R software. Simulation Study We study the effectiveness of the Laplace mixture model by considering two scenarios. We obtain the MLE estimates for a two-component Laplace mixture distribution and, for simplicity, we assume that the location parameters are $\mu_1=0$ and $\mu_2=2$. We study two cases, as illustrated in what follows. Firstly: Random samples of different sizes 50, 100, 200, 500, 1000, and 1500 are drawn from the Laplace mixture distribution with scale parameters $\lambda_1=\lambda_2=1$ and mixing parameter $\alpha_1=0.5$. The model parameters are estimated using the MLE method, and the results are shown in Table 6. The simulation is repeated 100 times to assess the consistency of the model parameter estimates over the 100 iterations; the results are summarized in Table 7, which reports the properties of each parameter estimate when the location parameters are known. The properties of the estimates are studied by obtaining the bias, the variance and the mean squared error (MSE) of the parameter estimates; we also compute the root mean squared error (RMSE). The results show reasonable MLE estimates for $\alpha_1$, $\lambda_1$ and $\lambda_2$, and all estimates have good values of bias, MSE and RMSE. The histogram of the simulated data is displayed in Figure 2. Secondly: We generate a random sample of 1000 data points from the Laplace mixture distribution with the same assumptions as before ($\alpha_1=\alpha_2=0.5$, $\mu_1=0$ and $\mu_2=2$), where the scale parameters equal $\lambda_1=1$ and $\lambda_2=2$. Table 7. The properties of the MLE-estimated Laplace mixture parameters for the simulated data, where $\alpha_1=\alpha_2=0.5$, $\mu_1=0$, $\mu_2=2$ and $\lambda_1=\lambda_2=1$. Conclusions This paper proposed a Laplace mixture distribution with two components. The properties of the proposed mixture model were discussed theoretically. Moreover, the parameters were estimated using the method of moments and the maximum likelihood estimation method. The simulation study indicated that the model parameter estimates provide reasonable results that are close to the true values of the model parameters. For future work, one can apply semi-parametric methods to derive parameter estimates for Laplace mixture distributions with more components.
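As an illustration of the numerical optimization step described above, the following Python sketch (our own, not from the paper, which used R with the Newton-Raphson method) estimates $\alpha_1$, $\lambda_1$ and $\lambda_2$ by minimizing the negative log-likelihood with scipy, keeping the location parameters fixed at the known values $\mu_1=0$ and $\mu_2=2$ as in the simulation study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
mu1, mu2 = 0.0, 2.0          # location parameters assumed known, as in the paper

# Simulate data from the mixture with true alpha1 = 0.5 and lam1 = lam2 = 1.
n, alpha1_true, lam1_true, lam2_true = 1000, 0.5, 1.0, 1.0
labels = rng.random(n) < alpha1_true
x = np.where(labels, rng.laplace(mu1, lam1_true, n), rng.laplace(mu2, lam2_true, n))

def neg_log_likelihood(params, data):
    """Negative log-likelihood of the 2-component Laplace mixture (locations fixed)."""
    alpha1, lam1, lam2 = params
    comp1 = alpha1 / (2 * lam1) * np.exp(-np.abs(data - mu1) / lam1)
    comp2 = (1 - alpha1) / (2 * lam2) * np.exp(-np.abs(data - mu2) / lam2)
    return -np.sum(np.log(comp1 + comp2))

# Initial values and simple box constraints keep the parameters in their valid ranges.
result = minimize(neg_log_likelihood, x0=[0.4, 0.8, 1.2], args=(x,),
                  bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)],
                  method="L-BFGS-B")

print("MLE estimates (alpha1, lam1, lam2):", np.round(result.x, 3))
```

A bounded quasi-Newton routine is used here purely for convenience; any root-finding scheme applied to the score equations, such as the Newton-Raphson iteration mentioned in the text, should give comparable estimates.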
2,394
2019-09-01T00:00:00.000
[ "Mathematics" ]
Quantum gravity kinematics from extended TQFTs We show how extended topological quantum field theories (TQFTs) can be used to obtain a kinematical setup for quantum gravity, i.e. a kinematical Hilbert space together with a representation of the observable algebra including operators of quantum geometry. In particular, we consider the holonomy-flux algebra of (2+1)-dimensional Euclidean loop quantum gravity, and construct a new representation of this algebra that incorporates a positive cosmological constant. The vacuum state underlying our representation is defined by the Turaev-Viro TQFT. We therefore construct here a generalization, or more precisely a quantum deformation at root of unity, of the previously-introduced SU(2) BF representation. The extended Turaev-Viro TQFT provides a description of the excitations on top of the vacuum, which are essential to allow for a representation of the holonomies and fluxes. These excitations agree with the ones induced by massive and spinning particles, and therefore the framework presented here allows automatically for a description of the coupling of such matter to (2+1)-dimensional gravity with a cosmological constant. The new representation presents a number of advantages over the representations which exist so far. It possesses a very useful finiteness property which guarantees the discreteness of spectra for a wide class of quantum (intrinsic and extrinsic) geometrical operators. The notion of basic excitations leads to a fusion basis which offers exciting possibilities for constructing states with interesting global properties. The work presented here showcases how the framework of extended TQFTs can help design new representations and understand the associated notion of basic excitations. This is essential for the construction of the dynamics of quantum gravity, and will enable the study of possible phases of spin foam models and group field theories from a new perspective. One of the key conceptual lessons of Einstein's general relativity is that gravity is encoded in the very geometry of spacetime. This suggests in turn that quantum gravity might be realized as a theory of quantum geometry. This concept is a cornerstone of loop quantum gravity (LQG hereafter). LQG, which comes in complementary canonical and covariant formulations [1][2][3][4][5][6][7], provides a quantization of geometrical observables (more precisely the spatial metric and the extrinsic curvature of the spatial hypersurfaces), and in this sense defines a realization of quantum geometry. The geometrical observables are encoded in holonomies of an SU(2) connection (called the Ashtekar-Barbero connection [8][9][10][11]) and canonically-conjugated flux variables. Therefore, realizations of quantum geometry can be understood as being given by representations of the holonomy-flux algebra formed by these variables. Several such realizations of quantum geometry are now known. Each realization is based on and can be characterized by a different kinematical vacuum state. So far, all representations have in common that this vacuum is totally squeezed, i.e. sharply peaked either on the flux operators or on the holonomy operators. The operators of quantum geometry act on this vacuum state, and generate thereby all the excitations of the Hilbert space underlying the representation of the holonomy-flux algebra. The vacuum is therefore cyclic with respect to the set of holonomy and flux operators. 
The Ashtekar-Lewandowski (AL) representation [12][13][14][15] was the first one to be constructed and, importantly, it allows to take into account invariance under spatial diffeomorphisms: The vacuum underlying this representation is diffeomorphism-invariant, which allows the construction of a (spatially) diffeomorphism-invariant Hilbert space. In the AL representation the vacuum is a totally squeezed state sharply peaked on vanishing expectation values of the fluxes, whereas the holonomy variables have maximal uncertainty. The AL vacuum state therefore describes a totally degenerate spatial geometry. This renders the construction of states describing large scale geometries very cumbersome. This difficulty motivated Koslowski and Sahlmann to introduce a variant of the AL representation, which essentially amounts to adding to the action of the fluxes a term describing a background flux (field) E [16][17][18]. By doing so, the vacuum state is then sharply peaked on this background flux field, while the holonomies are still maximally uncertain. Although this vacuum is not (spatially) diffeomorphism-invariant anymore, an action of diffeomorphisms can still be defined [19][20][21]. An alternative path has recently been pointed out in [22]. It consists in using a (classical) canonical transformation affecting only the fluxes by adding to them a curvature-dependent term multiplied by an arbitrary constant θ [23]. By doing so, the new flux variables and the Ashtekar-Barbero connection still form a canonically-conjugated pair of variables, and one can therefore consider e.g. the usual AL representation based on these new variables. The corresponding AL vacuum state is then given by vanishing expectation values in the new fluxes, which, due to the θ shift, describes a geometry with certain symmetry properties 1 . A disadvantage of this proposal is however that the basic (spatial) geometrical operators, like the area or the volume, cannot anymore be expressed easily in terms of the new fluxes. A third possibility, introduced by the authors in [24], constructs a new representation of quantum geometry by dualizing many ingredients of the AL representation. The construction has been completed in [25], and in [26] with Bahr. This representation is based on a kinematical vacuum state which is sharply peaked on flat connections, whereas the fluxes have maximal uncertainty 2 . Since the space of flat connections has a much richer structure than the space of vanishing fluxes, certain details of the construction (in particular the definition of cylindrically consistent gauge covariant flux observables) are much more involved 3 than in the AL case (where one usually works with fluxes which are not gauge covariant). The vacuum state is also invariant under spatial diffeomorphisms, and one can therefore achieve a diffeomorphism-invariant representation. This representation is better suited for semi-classical constructions, and in particular for connecting canonical loop quantum gravity to spin foams. The reason is that spin foams are based on BF theory [30], a topological field theory whose physical states are also states with (locally) flat connections. For this reason, we refer to the representation worked out in [24][25][26] as the "BF representation". The fact that there are now several available realization of quantum geometry raises the question of whether there exist further possible generalizations. 
It was actually proposed in [29] to build representations by choosing a suitable topological quantum field theory (TQFT). The physical state (which is unique for a hypersurface with trivial topology) of this TQFT serves as a vacuum state, and the TQFT also determines the possible excitations which the representation can have. These excitations appear in the form of defects. These can be studied in the framework of extended TQFTs, which can broadly be understood as constructing TQFTs with boundaries. A particular class of boundaries, say for a two-dimensional hypersurface, are punctures obtained by removing disks from the hypersurface. These punctures will carry the defect excitations. A state with defect excitations is also a physical state but now, due to the punctures, on a topologically non-trivial hypersurface. So far, we have not mentioned the (holonomy-flux) observable algebra. This is the next question which one has to address: Can one define operators, including creation and annihilation operators for the defect excitations, that realize a representation of the observable algebra? To get a representation of the full holonomy-flux observable algebra (which includes holonomies and fluxes that are allowed to act anywhere), one needs to enlarge the framework of extended TQFTs by a so-called inductive limit. This procedure constructs, out of the Hilbert spaces with a fixed number of defects at fixed positions, a continuum Hilbert space where defects (and thus the associated excitations) can appear in any number and at any position. Both the AL and the BF representations can be understood in these terms. Indeed, in the case of the AL representation, the underlying TQFT is a trivial one imposing vanishing fluxes, and the "defects" describe excitations of spatial geometry. In the case of the BF representation, the underlying TQFT is BF theory, and the defects are given by curvature excitations 4 . Having at hand different representations hugely enlarges our toolbox for describing different regimes of quantum gravity. A key open problem is the construction of the dynamics and the continuum limit of the theory, two issues which are deeply intertwined with each other [31]. To this end, it would be helpful to have a representation which is based on a vacuum state which is as close as possible to a physical state, i.e. a state satisfying the constraints of the theory. The BF representation delivers such a state in (2 + 1) dimensions with a vanishing cosmological constant. This also holds in (3+1) dimensions for a certain operator ordering of the Hamiltonian constraint 5 . However, certain issues remain, as the geometry encoded by this vacuum is rather a generalized (so-called) twisted one [32][33][34][35]. We are thus naturally led to the question of whether there are TQFTs leading to a better-suited vacuum state. Footnotes: ... exponentiated fluxes. 3 The AL Hilbert space can be expressed as an L 2 space over an extended configuration space of connections [3]. For the representation [24], one would expect some measure space over an extended configuration space of fluxes. However, the fluxes are inherently non-commutative, which would therefore require a configuration space with a non-commutative multiplication. This has however only been worked out for the AL case so far [27,28]. 4 Torsion excitations are also possible, but in the case of SU(2) they lead to non-normalizable states. 5 Actually, only the so-called Euclidean part of the Hamiltonian constraint. 
From this point of view, it is therefore helpful to understand all possible (2 + 1)-and (3 + 1)dimensional TQFTs which admit a geometrical interpretation. A related question is to understand all possible phases in condensed matter, where there is lots of recent progress in (2 + 1) dimensions (see for example [36,37]). One possible way to find TQFTs, in particular with a geometrical interpretation, is to study the coarse graining of spin foams, as the end points of the coarse graining flows give typically topological models [38][39][40][41]. Apart from finding new TQFTs, we can also consider TQFTs which are known, as for example the Turaev-Viro (TV hereafter) model in (2+1) dimensions [42,43] and the Crane-Yetter model in (3+1) dimensions [44,45]. These models can both be understood as so-called quantum deformations of BF theory, where the classical gauge group SU(2) is replaced by its quantum deformation U q su(2) with deformation parameter q a root of unity. The aim of the present paper is to develop a representation of the holonomy-flux algebra based on the TV vacuum. This will in particular show that the general strategy proposed in [29] does work in the example of the TV TQFT, and that this strategy and the techniques outlined in this paper may also be applied to other TQFTs. The TV model describes Euclidean quantum gravity with a positive cosmological constant. There exists a tight relationship between the quantum deformation parameter and the cosmological constant, and a similar role of the quantum deformation is believed to hold also in (3+1) dimensions [46][47][48][49][50][51][52][53]. Another important feature of the quantum deformation at root of unity is that one can expect the Hilbert spaces associated to a fixed graph (which here will be replaced by Hilbert spaces on manifolds with fixed punctures) to be finite-dimensional. This will avoid certain technical inconveniences of the (undeformed) BF representation, in particular the need to resort to a Bohr compactification of the dual of the group SU (2). Furthermore, the TV model in its extended form has been quite recently developed mathematically [54][55][56][57] and has also found widespread applications in condensed matter and quantum computation [58][59][60][61]. In particular, the structure of the excitations for this model is very well understood. For this reason, we will concentrate here on the (2 + 1)-dimensional case, and leave the (3 + 1)-dimensional one for future development. As mentioned above, the (2 + 1)-dimensional TV model describes quantum gravity with a cosmological constant. Therefore, the physical states of the theory are peaked on homogeneous curvature. Hence, one could think that a corresponding representation can be achieved from the BF representation, in which the vacuum is peaked on vanishing curvature, by using a similar construction as when shifting from the AL representation to the KS one. There, one shifts the momenta (given by the fluxes in the AL representation) by a constant term which is encoding the background flux field. However, homogeneous curvature means that the curvature is a functional of the fluxes themselves 6 , and thus we face a much more complicated situation than in the KS construction (where the background flux field cannot depend on the connection). Additionally, we will still have a diffeomorphism-invariant vacuum state, in contrast to the KS representation. 
Lots of previous work has aimed at constructing the physical Hilbert space, or in other words the vacuum state peaked on homogeneous curvature, starting from the AL representation [62][63][64]. This has not yet been attempted by starting from the BF representation, since this formulation is a very recent one. However, at the classical level, it has been shown that Regge calculus with homogeneously curved building blocks arises from coarse graining Regge calculus with flat building blocks and a cosmological constant term [65,66]. While the derivation of the quantum deformed structure from the canonical quantization (in either the AL or BF representation) of gravity with a cosmological constant is still open, here we will simply assume the quantum group structure from the onset. Since we aim at representing the full holonomy-flux algebra, we will not only consider the vacuum state, but also excitations on top of this vacuum. It turns out that these excitations do agree with the ones that would be induced by coupling particles to gravity 7 . Notice that this does not show that we have coupled matter to gravity. Rather, it turns out that particles lead to the most elementary excitations of quantum geometry, which coincide with the defect excitations in the BF (for the flat case) or TV model. In constructing a realization of quantum geometry based on the TV TQFT, we will rely on a broad range of previous works from condensed matter and mathematical physics. Indeed, the Hilbert space (for a fixed number of punctures) is closely related to so-called string net models [58,59], a mathematical exposition of which can be found in [70]. The (defect) excitations allow for anyonic quantum computation as is explained in [59]. The classification of these excitations goes back to an argument by Ocneanu [71,72], and is discussed in [60] and [70]. Mathematically, this classification relies on the definition of the Drinfeld centre of a fusion category, which has been explored in [73,74]. The relation between the inner product of the Hilbert space and the (extended) TV TQFT is discussed in [57,70]. We are going to see that the holonomy and flux operators appear as ribbon operators. Closed ribbon operators are discussed in [58]. Open ribbon operators are mentioned in [60] and are defined in the context of a fixed (dual) lattice in [61]. However the detailed definition of open ribbon operators via fusion of punctures is only (as far as we are aware of) developed in the present work. Likewise, the entire setup of the inductive limit which is allowing for an arbitrary number of punctures and ensuring cylindrical consistency, is new. The aim of the present paper is to be as self-contained as possible, and we will therefore provide a review of some of the material mentioned in the previous paragraph. We will also provide a quantum geometry interpretation (as far as it is available) of the structures and operators which will appear in this work. We hope that this paper will introduce a number of techniques from condensed matter and extended TQFTs into the (loop) quantum gravity community. In particular, we believe that these techniques can be generalized and might lead to an entire class of new realizations of quantum geometry. Apart from providing an important case study of how to construct a realization of quantum geometry starting from a TQFT, the TV representation possesses a number of advantages, both over the AL and the BF representation. We here list a few of these. 
• The Hilbert spaces based on a fixed number of punctures (which are analogous to the Hilbert spaces based on fixed triangulations in the BF representation, or the Hilbert spaces based on fixed graphs in the AL representation) are finite-dimensional. Consequently, the spectra of operators which do not introduce new punctures are discrete 8 . This is a major advantage compared to the BF representation where spectra of geometric operators are continuous due to a compactification of the dual of the Lie group which leads to an aperiodic winding of the spectra. This turns a "discrete looking" spectrum into a continuous one (see [26,77,78]). The natural cutoff on the spins provided by the quantum group deformation (at root of unity) also avoids possible (infrared) divergencies. • In the context of constrained quantization, a finite-dimensional Hilbert space offers also many advantages 9 , as the construction of a so-called physical inner product is much simpler in a finite dimensional setup (see e.g. [79] for a discussion). Here, we have however to assume that the constraints can be quantized in such a way that they do not change the underlying discrete structure (here the number of punctures) of the states. This finiteness with quantum group structure is also a huge advantage if one want to consider the coarse graining of the models. First, it allows for a numerical implementation of so-called tensor network algorithms for coarse graining [40,41,80,81]. For this reason, quantum group models are used in [39,82,83]. Second, spin foam models based on the undeformed SU(2) group feature divergencies due to the unbounded summation over spins (see [84] and references therein). These divergencies are avoided if one works with a quantum deformation at root of unity. • In the case of the BF representation, the space of fluxes (corresponding to the dual of the group SU (2)) is compactified. This requires the exponentiation of the fluxes, and with this the introduction of an additional exponentiation parameter (usually called µ). In the fourdimensional case this parameter can be absorbed into the Barbero-Immirzi parameter (see [26]), and thus one still has the same number of additional parameters. In three spacetime dimensions however, one can define the theory without a Barbero-Immirzi parameter, and one has thus a priori (i.e. on the kinematical Hilbert space) an additional parameter, which needs to be fixed via some additional physical principle. In the quantum group case the compactification is provided naturally, and we do not need an additional parameter. In some sense this parameter is rather provided by the cosmological constant, whose inverse represents a cutoff on the admissible spins. • We provide here the first continuum construction of a holonomy-flux representation that incorporates a (positive) cosmological constant. This is based on a vacuum state describing homogeneously curved geometries. A full continuum description has so far not been achieved even with an AL-like vacuum, but will appear soon in [76]. • The framework presented here allows for a very natural identification of "basic excitations". These basic excitations turn out to be described with the same quantum numbers as particles coupled to gravity. Therefore, we have a natural starting point for describing the coupling of point particles (and possibly other types of matter) to gravity with a cosmological constant, which to our knowledge has so far not been discussed in the literature. 
We will also see that structures such as the Drinfeld double will appear quite naturally here, while they have been derived in a more complicated fashion for the discussion of point particles coupled to gravity without cosmological constant [69,85,86]. • Finally, the framework presented here introduces a new kind of basis, based on the "basic excitations" and a coarse graining of these basic excitations. This is the so-called fusion basis, which offers a range of new possibilities to design states with global properties much more effortlessly than in the spin network basis. This paper is organized as follows: In the next section, we briefly review the most important features of the BF representation and expose, by comparison, the main results of the TV representation developed in the rest of the paper. In section III we present the tools of graphical calculus which will enable us to depict and manipulate the various mathematical structures (such as states, excitations, . . . ) playing a role in our construction. Section IV presents the construction of the so-called graph Hilbert space based on punctured manifolds, together with its basis and inner product, and subsequently the characterization of the vacuum state in the case of spherical topology. In section V, we focus on the two-punctured sphere in order to explain the structure of the basic quasi-particle excitations on top of the vacuum, and introduce explicitly various mathematical notions (such as the half-braiding of the Drinfeld center) which enable the manipulation of these excitations. Section VI is devoted to the study of the three-punctured sphere, which is necessary in order to define the fusion of quasi-particle excitations and in turn the action via fusion of the ribbon creation operators. The detailed properties and the action of open and closed ribbon operators are discussed in section VII, where it is also explained how this forms a representation of the holonomy-flux algebra. Finally, we present the conclusion and some perspectives in section VIII. II. OUTLINE OF BF AND TV REPRESENTATIONS In the first part of this section we shortly review the construction of the BF representation [24][25][26]. This will make easier the understanding of the main similarities and differences with the new TV representation, which we outline in the second part of this section. A. BF representation and flat curvature vacuum The continuum Hilbert spaces of both the AL and the BF representation are built as an inductive limit of Hilbert spaces. This inductive limit construction will also hold for the TV representation. To construct an inductive limit Hilbert space, one considers a family of Hilbert spaces H ∆ labeled (in the case of the BF representation) by embedded triangulations. These triangulations are also equipped with a flagged root (see [25] for a precise definition) and carry a partial order ≺. Furthermore, this partial order is also directed in the sense that a triangulation ∆ is finer than ∆, which we denote by ∆ ≺ ∆ , iff ∆ can be obtained from ∆ by a sequence of refinement operations (specified in [25,26]). Additionally, given two triangulations ∆ and ∆ , one can always find a triangulation ∆ which is a refinement of both ∆ and ∆ . Together with this family of Hilbert spaces, we need embedding maps E ∆∆ : H ∆ → H ∆ which embed isometrically the states in the Hilbert space based on a coarser triangulation into the Hilbert space based on a finer triangulation. 
These embedding maps need to satisfy a number of consistency relations, the most important one being for any triple ∆ ≺ ∆ ≺ ∆ . This condition implements the fact that the embedding map of a Hilbert space H ∆ into a finer H ∆ does not depend on the number of intermediate steps which one might take in order to arrive at this map. The inductive limit Hilbert space is formally defined as the (closure of the) disjoint union of all Hilbert spaces modulo the following equivalence relation: Two states ψ ∆ and ψ ∆ are equivalent if there exists an embedding map E ∆,∆ such that This means that states are equivalent if they can be made equal under some refinement operation. The embedding maps can be interpreted as (locally) specifying a vacuum state [87]. Indeed, excitations are restricted to occur at the vertices of the triangulation in (2+1) spacetime dimensions or at the edges of the triangulation in (3+1) dimensions. Thus, regions without vertices or edges do not carry any excitations and are (implicitly) in a vacuum state. If a refinement of the triangulation adds vertices or edges to this region, the corresponding refinement map has to result in an equivalent state, i.e. a state which still assigns a (quasi-local) vacuum to this region. Since a refinement adds additional variables, the refinement map has to assign to the additional variables the vacuum configuration, thereby making the vacuum state as a function of these additional variables explicit. This interpretation does not only hold for the BF representation, but also for the AL one where the vacuum corresponds to vanishing spatial geometry. It will also hold for the TV representation which we construct in this work. In fact, this notion can be generalized to a physical vacuum and dynamical embedding maps, which might not necessarily be described by a topological field theory [31,87] (this would be relevant for (3 + 1) or (2 + 1) dimensions when coupling to a matter field). In addition to the inductive limit Hilbert space construction (which as we explained exists for the other representations as well), the BF representation is based on the following specific ingredients: • Hilbert space. The Hilbert space H ∆ associated to a fixed (rooted) triangulation consists of functions of holonomies on the graph Γ (dual to ∆) that are gauge-invariant everywhere except at the root. Such functions can be expressed as functions of holonomies assigned to a basis of independent cycles of Γ (this can be most conveniently achieved by using a gaugefixing along a spanning tree in the dual graph, and then the leaves with respect to the tree define a basis of independent cycles). We denote the states by ψ{g }. To make the abovementioned embedding maps well-defined and isometric, we have to equip the configuration space of holonomies with a discrete topology. This means that the inner product on H ∆ is given by where µ d is the discrete measure. • Global vacuum. The global vacuum ψ ∅ for a fixed H ∆ is given by where δ(g , 1) = 1 iff g = 1, i.e. δ(·, ·) is the Kronecker delta. Note that this state is well-defined and normalized in the discrete topology inner product which we have defined for H ∆ . • Embedding maps. A refinement ∆ → ∆ of a triangulation leads to additional independent cycles { } in the refined dual graph Γ as compared to Γ (with a basis of independent cycles { }). 
The embedding maps are then given by Therefore, the additional holonomy observables {g } are put into the vacuum state, which is given by the (Kronecker) delta function peaked on vanishing curvature. • Gauge invariance. In the case of the BF representation, one works with states that are gauge-invariant at every node of the dual graph except at the root. The reason for working with such (almost) gauge-invariant states is that due to the discrete topology on the gauge group, gauge averaging leads to non-normalizable states. For the root one can define a group-averaging (rigging map) procedure that is compatible with the embedding maps defined above (see [26] for details). • Representation of holonomy operators. The holonomies act (in the holonomy representation) as multiplication operators. They leave the vacuum invariant and therefore do not lead to (curvature) excitations. In order to preserve gauge invariance, (non-trace) holonomies are only allowed to start and end at the root. • Representation of exponentiated fluxes. In the AL representation, fluxes act as (Lie) derivatives on the holonomies. This action is not well defined with the discrete topology of the BF representation, and there we rather have to work with the exponentiated action of the flux operators. This is in turn the generator of translations, whose action at the level of the group is given by multiplication from the left or right on the (group) argument of the wave function. In order to preserve gauge invariance, we have to parallel transport the (exponentiated) fluxes to the root. For a multiplication from the right we therefore have e.g. where h rs(i) is the holonomy from the root to the source node of the link i. Furthermore, one can compose fluxes along edges of the triangulation (in (2 + 1) dimensions) or triangles of the triangulation (in (3 + 1) dimensions), as described in detail in [25]. Since an exponentiated flux acts by multiplication, it shifts the argument of the vacuum state which is given by the Kronecker delta. This therefore leads to a curvature excitation for the faces that border the link associated to the shifted group argument. In (2 + 1) dimensions, this means that we get (opposite) curvatures associated to the two vertices of the edge dual to the link. A composed flux along a number of edges (giving a so-called co-path) will lead to excitations only at the boundaries of this co-path. In (2 + 1) dimensions, these operators constitute a special case of the so-called ribbon operators introduced by Kitaev in the case of finite groups [88]. In the case of [88] the ribbons also include an action of the holonomy operator. With a finite group, it is less problematic to allow for torsion degrees of freedom (i.e. violations of gauge invariance), and thus to have holonomies starting and ending at arbitrary nodes. Those ribbon operators can lead to both curvature excitations and torsion excitations at the "ends" of the ribbon (where the "ends" of a ribbon now include a vertex of the triangulation, at which one can have curvature excitation, and a node of the dual graph, at which one can have a torsion excitation). • Cylindrical consistency of operators. To a given (holonomy or exponentiated flux) operator O ∆ associated to a fixed triangulation ∆, one can assign a refined operator O ∆ such that the action of the operator intertwines with the refinement map in the sense that This makes the operators well-defined on the inductive limit (and therefore continuum) Hilbert space H ∞ . 
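The bullet points above refer to several explicit formulas whose expressions did not survive in this excerpt. As a rough sketch, and only as our reconstruction of the standard forms such a construction would take (not a verbatim quote of the original equations), they have roughly the following shape:

```latex
% Sketch of the standard BF-representation formulas referred to above
% (reconstructed forms, not verbatim from the original paper).

% Inner product on H_Delta, with the discrete measure mu_d on the group:
\langle \psi , \psi' \rangle_{\Delta}
  \;=\; \int \prod_{\ell} \mathrm{d}\mu_d(g_{\ell})\;
        \overline{\psi\{g_{\ell}\}}\, \psi'\{g_{\ell}\} .

% Global vacuum: Kronecker delta peaked on trivial (flat) holonomies:
\psi_{\varnothing}\{g_{\ell}\} \;=\; \prod_{\ell} \delta(g_{\ell}, \mathbb{1}) .

% Embedding map for a refinement \Delta \prec \Delta', with new cycles \ell':
\big(E_{\Delta'\Delta}\,\psi\big)\{g_{\ell}, g_{\ell'}\}
  \;=\; \psi\{g_{\ell}\} \prod_{\ell'} \delta(g_{\ell'}, \mathbb{1}) .

% Cylindrical consistency of an operator O under refinement:
E_{\Delta'\Delta}\, O_{\Delta} \;=\; O_{\Delta'}\, E_{\Delta'\Delta} .
```

These expressions are only meant to make the verbal description of the bullets concrete; the precise conventions (gauge covariance of the fluxes, choice of root, etc.) are spelled out in the references cited in the text.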
For the refinement of the flux operators, the notion of composition of fluxes (leading in (2 + 1) dimensions to the notion of ribbons) is essential. Now that we have recalled the essential features of the BF representation, we explain the main results about the TV representation which we construct in the rest of this paper. B. Outline of the TV representation Our goal is to construct a representation of a (possibly deformed) holonomy-flux algebra, with a vacuum state given by the physical state of the TV model [42]. The TV model is based on the fusion category SU(2) k , the objects of which are representations of the quantum group U q su(2) (with q = e 2πi/(k+2) a root of unity), which can in turn be tensored (or "fused") with each other. We will give more details about the properties of this fusion category in section III. Since the quantum group U q su(2) at root of unity does not posses a group representation, we are forced to work in the spin representation. This latter is however much easier to define and manipulate than in the BF case (see [26]), as in contrast to SU(2) there are just finitely many (admissible) representations in SU(2) k . We are going to define Hilbert spaces spanned by a spin network basis, which is represented by a certain class of graphs whose edges are labelled by representations of SU(2) k . Furthermore, we will define operators and equivalence relations on this basis by using a certain graphical calculus introduced in section III. As mentioned above, we will also use for the TV representation an inductive limit construction. For the BF representation, the inductive limit was constructed with the help of a partially ordered set of triangulations. We will proceed differently for the TV representations. Here, the partially ordered set will be given by (spatial) manifolds Σ p where p denotes the number of punctures (we will restrict to spheres in the main text, and comment on the higher genus case in appendix B). The reason is that the excitations are restricted to occur at the punctures. Thus, if we want to represent a more complicated state, defined by having more distributed excitations, we just need to add more punctures. In some sense, the Hilbert space H Σp associated to a fixed position and number of punctures will therefore already be a continuum Hilbert space (it is in fact the continuum Hilbert space associated to the corresponding TQFT on the hypersurface Σ p ). It will however only allow for excitations located at the punctures. Nevertheless, the states will be represented by a combinatorial (discrete) structure, namely the labelled graphs. These graphs will be dual to a triangulation of the manifold with punctures. To reconcile this with the notion of a continuum Hilbert space, we will allow for all possible triangulations and associated dual graphs, as long as these have the same punctures (which are embedded in Σ). Correspondingly, we will define equivalence relations on the states which are nothing else than analogs of refining moves and now also coarsening operations on the triangulation. For this to work, it is also essential that regions of Σ p not being crossed by strands of the graph can be interpreted as being in a (local) vacuum state. Therefore, the inductive limit over triangulations in the BF case will here, in the TV case, split into two parts. 
The first part is encoded into the equivalences between graph states with the same punctures, and the second part will be encoded in embedding maps that do add further punctures to a manifold Σ p . This brings us to the discussion of the vacuum state and the embedding maps. The vacuum state will again be a state without curvature 10 Later on, this will be translated explicitly into two properties: (i) Wilson loops (in an SU(2) k representation) will act trivially, and (ii) a strand of the graph can be deformed by moving this strand over a puncture with no excitation. This latter property can be understood as being able to deform a Wilson line, which is indeed only possible if there is no (excess) curvature. The vacuum state will also be without torsion 11 . Note that, in contrast to the BF representation, we will allow for the possibility of having excitations with torsion at the punctures (again this is made possible by the "finiteness" of SU(2) k as compared to SU(2)). The embedding maps will be defined in such a way that any new puncture does carry vanishing curvature and torsion. The notion of "vanishing curvature and torsion" will also be specified by constraints which do agree with the constraints imposed by the TV model. Having discussed the vacuum state, we now come to the excitations and to the operators which may generate these excitations. There is a beautiful argument, going back to Ocneanu [71,72], which allows to define the notion of "elementary excitation" associated to a puncture. These excitations are indeed given by the violation of curvature and torsion constraints and turn out to be labelled by objects of the so-called Drinfeld centre of the fusion category, which itself is a fusion category [60,[71][72][73][74]. In our case, the labels can be interpreted in terms of mass and spin, denoting the amount of curvature and torsion constraint violation respectively. Next, we define operators which do generate these excitations. As in the case of the (2 + 1)dimensional BF representation these are given by ribbon operators, but are now however labelled by an object of the Drinfeld centre. We will argue that these ribbon operators provide indeed generalizations of the holonomy and flux operators. The (cylindrical) consistency of operators splits again into two parts. First, the operators need to be well-defined on a fixed Hilbert space H Σp , i.e. compatible with the equivalences of graph states which implement changes of the triangulation. This will be imposed by a so-called naturality condition of the half-braiding, a structure which comes with the Drinfeld centre. Second, the ribbon operators need to be compatible with the addition of punctures. This will follow from the ability of gluing ribbons together along punctures. In summary, these ingredients will define a representation of a (q-deformed) holonomy-flux algebra based on the vacuum defined by the TV state sum model [42]. The excitations agree with the ones defined by the extended TQFT based on this model [70]. The ribbon operators, which represent (combinations of) holonomies and fluxes allow us the generate, manipulate, and measure these excitations. Having presented an overview of our results, we now turn to the details of the construction. III. 
III. GRAPHICAL CALCULUS

In this section, we introduce the basic algebraic data which we are going to use for our construction, along with graphical rules which will enable us to efficiently depict and manipulate various structures (such as states, operators, morphisms, and inner products). As explained in the introduction, we are interested in data coming from the quantum group SU(2)_k at root of unity, and more precisely in the fact that its representation theory gives rise to a modular fusion category. In order to avoid wandering through lengthy mathematical definitions, we are (safely) going to overlook the unnecessary details and instead focus on the physical meaning of the various axioms defining such a category. However, as a result of this simplification, one should remember that most of the properties introduced below will only hold for SU(2)_k and not for arbitrary fusion categories. We refer the reader to appendix A for details about the structure of SU(2)_k, and to [91] for generalities about fusion categories and their relation to quantum groups. A review of a different version of graphical calculus for SU(2)_k can be found in [92,93]. Here we will mostly follow the notations of [59], although with certain slight adaptations.

Footnote 10: ... connection which is flat. It is this notion of flatness which we are going to encode later on in terms of F-symbols. In this sense, when speaking about curvature, we therefore mean curvature in excess of the homogeneous curvature encoded by the vacuum state.

Footnote 11: This torsion is defined as a violation of the Gauss constraints given by the TV model. Note that when translating these Gauss constraints in terms of fluxes they appear to be deformed as compared to the Gauss constraints in the flat case [52,89,90].

A. SU(2)_k as a modular fusion category

Loosely speaking, a ribbon fusion category consists of a finite set of labels describing the objects of the category, so-called fusion coefficients and F-symbols describing the fusion structure for these labels, and an R-matrix describing their braiding (footnote 12). A ribbon fusion category is sometimes also called a premodular fusion category, and when its topological S-matrix is non-degenerate it becomes a modular fusion category (footnote 13). We are going to explain why SU(2)_k does indeed possess all this structure. When the deformation parameter is a root of unity, the quantum group SU(2)_k only has a finite number of irreducible representations with non-vanishing and non-cyclic quantum dimensions. These representations are labelled by half-integer spins i, j, k, . . . which take values in the set J = {0, . . . , k/2}, and their quantum dimension is given by the quantum evaluation $d_j = [2j+1]$ defined in (A3). This in itself already defines the structure of a category whose objects (footnote 14) are the representations and whose maps are morphisms between them. The fusion structure on this category comes from the existence of a well-defined tensor product between representations. More precisely, the recoupling of two representations is given by the fusion rule $i \times j = \sum_k \delta_{ijk}\, k$, where the $\delta_{ijk}$ are the so-called fusion coefficients, and where the sums should from now on be understood as running over the set J of spin labels. The fusion coefficients satisfy the conditions $\delta_{ij0} = \delta_{ij}$ and $\sum_m \delta_{ijm}\delta_{mkl} = \sum_m \delta_{jkm}\delta_{iml}$. The first condition simply means that the spin j = 0 is the unit element of the fusion algebra, while the second one reflects the associativity of the fusion. Furthermore, the fusion coefficients satisfy the symmetry properties $\delta_{ijk} = \delta_{jki} = \delta_{kji}$.
Explicitly, a triple of spins is said to be admissible, i.e. one has $\delta_{ijk} = 1$, if and only if the triangle inequalities $i \leq j+k$, $j \leq k+i$, $k \leq i+j$ hold, the total spin is an integer, $i+j+k \in \mathbb{N}$, and the level constraint $i+j+k \leq k$ is satisfied. If a triple is non-admissible then $\delta_{ijk} = 0$. Since $\delta_{ijk} \in \{0, 1\}$, we say that SU(2)_k has no fusion multiplicities.

Footnote 12: Technically, this only defines a braided fusion category. A ribbon fusion category requires an extra compatibility condition between the pivotal structure of the category and the braiding. Since this is satisfied for SU(2)_k (which in fact gives rise to a spherical category) we ignore this detail and simply use the term "ribbon fusion category".

Footnote 13: Since the S-matrix is defined via the R-matrix, and therefore requires a notion of braiding, we will from now on omit the term "ribbon" when talking about a modular fusion category.

Footnote 14: In the condensed matter literature, these objects are often referred to as particles, anyons or charges. Here we will refrain from using such a nomenclature, and reserve the term "anyon" for the quasi-particle excitations on top of the vacuum which we will encounter later on.

As a remark, let us point out that one can introduce the fusion matrices N_i whose coefficients are $(N_i)_{jk} := \delta_{ijk}$. Using (3.2), one can then compute that the product of two such matrices is given by $N_i N_j = \sum_k \delta_{ijk} N_k$, which shows that the fusion matrices form a representation of the fusion algebra. We are later on going to encounter an action of these matrices on one-dimensional representations of the fusion algebra. We are now ready to introduce the first elements of graphical calculus. For this, we simply assign to each representation of spin j an unoriented and smoothly deformable strand. The absence of orientation comes from the fact that the representations are self-dual. This has an important subtle consequence, since one has to keep track of sign factors $\alpha_j := (-1)^{2j}$ called the Frobenius-Schur indicators. In particular, the quantum trace of a single strand is given by $v_j^2 = \alpha_j d_j$ (3.5), and is therefore not necessarily positive (although always real). We will make the choice of square root $v_j = (-1)^j \sqrt{d_j}$ (with $(-1)^{1/2} = i$) for all $v_j^2$. As we will see in calculations later on, the $v_j$'s will typically appear in combinations of the form e.g. $\delta_{ijk}\, v_i v_j v_k$, which are real. As usual, the fusion or splitting of representations in this graphical calculus is represented by trivalent nodes, and a trivial representation can always be grafted to a strand. This simply means that the strand with spin label j = 0 is invisible, and therefore can always freely be inserted in order to fuse two parallel strands using F-symbols. The above-mentioned F-symbols appear when considering the recoupling of four representations, where they describe the change of basis between $\mathrm{Hom}\big(i, j \times (k \times l)\big)$ and $\mathrm{Hom}\big(i, (j \times k) \times l\big)$, which we will refer to as the F-move. In appendix A we give the explicit definition of these F-symbols in terms of the quantum 6j-symbols of recoupling theory. Geometrically, if we think of the trivalent nodes as being dual to triangles, and of the strands as being dual to edges, it is well-known that equation (3.8) is the algebraic counterpart of the 2-2 Pachner move. What is important for our purposes is that the F-symbols satisfy the relations collected in (3.9): physicality, tetrahedral symmetry, the pentagon identity, as well as an orthogonality relation (3.9c) and a normalization condition (3.9e). The physicality condition is simply a condition of admissibility on the spin labels entering the F-symbols, and can be read off of the defining equation (3.8).
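To make the fusion data above concrete, the following short Python sketch (an illustrative numerical aid written for this presentation, not part of the original construction; the helper names `spins`, `delta` and `fusion_matrices` are ad hoc) enumerates the spin set J, evaluates the admissibility condition defining $\delta_{ijk}$, and checks that the fusion matrices $(N_i)_{jk} = \delta_{ijk}$ indeed represent the fusion algebra.

```python
from fractions import Fraction
import numpy as np

def spins(k):
    """Admissible spins J = {0, 1/2, ..., k/2} of SU(2)_k."""
    return [Fraction(n, 2) for n in range(k + 1)]

def delta(i, j, l, k):
    """Fusion coefficient delta_{ijl} of SU(2)_k (no multiplicities)."""
    triangle = (i <= j + l) and (j <= l + i) and (l <= i + j)
    integer = (i + j + l).denominator == 1
    level = (i + j + l) <= k
    return 1 if (triangle and integer and level) else 0

def fusion_matrices(k):
    J = spins(k)
    return {i: np.array([[delta(i, j, l, k) for l in J] for j in J]) for i in J}

if __name__ == "__main__":
    k = 4
    J = spins(k)
    N = fusion_matrices(k)
    # unit element: fusing with the spin-0 representation acts as the identity
    assert np.array_equal(N[Fraction(0)], np.eye(len(J), dtype=int))
    # representation property: N_i N_j = sum_l delta_{ijl} N_l (associativity of fusion)
    for i in J:
        for j in J:
            rhs = sum(delta(i, j, l, k) * N[l] for l in J)
            assert np.array_equal(N[i] @ N[j], rhs)
    print("fusion algebra checks passed for k =", k)
```

Running the sketch for a few levels simply confirms the multiplicity-free fusion structure described in the text; it is not needed for any of the derivations that follow.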
If one considers (3.8) in the triangulation picture, the physicality conditions implement the triangle inequalities (with the spin attached to a strand giving the length of the dual edge). The reality of the F-symbols, which simply comes from their definition (A8), implies that the fusion category is in fact unitary. In the geometrical setting where F-symbols are attached to tetrahedra, the pentagon identity is often referred to as the Biedenharn-Elliott identity. With all this structure at hand, we have so far defined a (unitary) fusion category. The ribbon structure of the category is simply inherited from the non-trivial braiding in SU(2)_k. More precisely, the planar graphs can have non-trivial crossings, and these can be undone by using the R-matrix as in (3.10). We give the explicit expression for the R-matrix of SU(2)_k in appendix A. The R-matrix also has to satisfy certain consistency relations known as the hexagon identities, which ensure that the braiding is consistent with the F-moves. In the case of SU(2)_k, since $R^{ij}_k = R^{ji}_k$, there is only one such relation instead of two. The existence of this non-trivial braiding structure is the reason for which we refer to the elements of the graphical representation as strands, and not simply as links. Furthermore, because of the semi-simplicity of SU(2)_k (or of the category), it is sufficient to restrict ourselves to trivalent graphs. Now that we have the structure of a ribbon fusion (or premodular) category, we come to the last property of interest in this work, which is that of modularity. Modularity is in fact a condition on the so-called topological S-matrix. A ribbon fusion category is said to be modular if det(S) ≠ 0, which is the case for SU(2)_k. The S-matrix has entries defined via the braiding and the R-matrix in (3.12); explicitly, $S_{ij} = s_{ij}/D$, where the elements $s_{ij} = (-1)^{2(i+j)}\,[(2i+1)(2j+1)]$ are the so-called non-normalized Verlinde coefficients, and where we have introduced the total quantum dimension $D = \sqrt{\sum_j d_j^2}$ (3.13). In particular, one has that $D S_{0j} = v_j^2$. An important identity is (3.14), which can be shown by realizing that both graphs must be proportional as elements of the same one-dimensional vector space, and by using the definitions (3.12) and (3.5) once the strand j has been closed. More details about the definition and the properties of the S-matrix are given in appendix A. We now have all the necessary ingredients to proceed with our construction. In the next subsection, we are going to give a few useful graphical evaluations which follow from the definitions given above, and introduce the concept of vacuum strands.

B. Basic graphical identities

The first useful identity to consider is that corresponding to the removal of a bubble. It is given by the so-called bubble-move (3.15). To prove this identity, notice first that its left-hand side, if read upwards, can be interpreted as an intertwining map from the representation space l to the representation space k. However, due to Schur's lemma, a non-vanishing intertwining map between irreducible representations requires k = l. Therefore, we necessarily have a relation of the form (3.16) for some coefficient $\beta_{ijk}$. To find this coefficient, we can now close the strand k of the bubble configuration, and apply an F-move on a horizontal strand of spin 0 inside of the bubble. This gives (3.17), where we have used the normalization condition (3.9e) for the F-symbol.
Using (3.16) repeatedly, as well as the quantum trace evaluation (3.5), equation (3.17) can be evaluated explicitly. As a consequence of the bubble-move, we therefore also find the relation (3.19). Now, this relation can be used to derive the important result (3.20), where in the first step we have used an F-move with a strand of spin 0 between the two loops. This implies that the quantum trace evaluations (3.5) obey the fusion rule, or, in other words, that the $v_j$'s provide a one-dimensional representation of the fusion algebra. Because of the integrality condition $i+j+k \in \mathbb{N}$ in (3.3), one has $\alpha_i \alpha_j \alpha_k = 1$ if $\delta_{ijk} = 1$, which implies that the quantum dimensions themselves also satisfy the fusion rule, i.e. $d_i d_j = \sum_k \delta_{ijk}\, d_k$ (3.21). Now, if we introduce the vector d whose components are given by the quantum dimensions, (3.21) can be written as $N_i\, \mathbf{d} = d_i\, \mathbf{d}$, which shows that d is an eigenvector for the fusion matrices $N_i$ with eigenvalue $d_i$. We can also see that the total quantum dimension (3.13) is nothing but the norm of this vector. Next, the bubble-move can be combined with the F-move to obtain the algebraic expression of the 3-1 Pachner move. Moreover, as a special case of the F-move one has the decomposition of the identity (3.23), which can further be combined with the braiding relation (3.10) to obtain the resolutions of crossings (3.24). By compatibility between these two expressions (i.e. the possibility of resolving a crossing either vertically or horizontally) one then gets conditions which are indeed satisfied for SU(2)_k [94]. This allows us to omit the orientation arrows of the strands even in the case where braiding is involved. Throughout this work, we will keep in the main text only the proofs which require little space, and collect in appendix C the more lengthy results.

C. Vacuum strands and loops

Now, we introduce the important concept of vacuum strands. These are defined as the weighted sum of all possible standard strands divided by the total quantum dimension. If we think of the standard strands as representing graphically elements of the fusion algebra, we can naturally ask what is the graphical representation of the above-mentioned eigenvector d. This is given precisely by these new dotted vacuum strands. By closing a vacuum strand so as to form a loop, one can see that it evaluates to D, where we have used (3.13) and (3.5). This means that, in the graphical representation, we can always introduce the identity in the form 1 = (vacuum loop) × D^{-1}. We will make extensive use of this property later on. Once they are closed so as to form loops, these vacuum strands have the remarkable property of being invisible to standard strands, which can slide over them. For the proof of this sliding relation, we have used in the second equality an F-move with a strand of spin 0 between the strands j and k. It is important to notice that this sliding relation holds regardless of what is contained in the shaded region, even if it is a puncture in the manifold. In what follows we will always use a thick dot to represent punctures. Making use of this sliding property, we can now show that the stacking of two vacuum loops collapses to a single vacuum loop. In order to prove this we have used property (3.14), the fact that $S_{0j} = v_j^2/D$, and in the last step the unitarity of the S-matrix (footnote 15). Note that the last equation, as well as the identity (3.31), therefore hold only in modular fusion categories (such as SU(2)_k). This annihilation property justifies the name "vacuum loop".
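The algebraic facts just used — that the quantum dimensions obey the fusion rule, that d is a common eigenvector of the fusion matrices with the total quantum dimension D as its norm, and that the S-matrix built from the non-normalized Verlinde coefficients is unitary, non-degenerate, and satisfies $D S_{0j} = v_j^2$ — can all be verified numerically. The Python sketch below (written for this presentation under the conventions stated above; it is an illustration, not the paper's formalism) does so for a sample level.

```python
import numpy as np
from fractions import Fraction

def spins(k):
    return [Fraction(n, 2) for n in range(k + 1)]

def delta(i, j, l, k):
    ok = (i <= j + l) and (j <= l + i) and (l <= i + j)
    ok = ok and (i + j + l).denominator == 1 and (i + j + l) <= k
    return 1 if ok else 0

def qnum(n, k):
    """Quantum number [n] = sin(pi n/(k+2)) / sin(pi/(k+2))."""
    return np.sin(np.pi * n / (k + 2)) / np.sin(np.pi / (k + 2))

if __name__ == "__main__":
    k = 5
    J = spins(k)
    d = np.array([qnum(2 * j + 1, k) for j in J])      # quantum dimensions d_j = [2j+1]
    D = np.linalg.norm(d)                               # total quantum dimension
    v2 = np.array([(-1) ** int(2 * j) * dj for j, dj in zip(J, d)])   # v_j^2 = alpha_j d_j

    # quantum dimensions obey the fusion rule: N_i d = d_i d for every fusion matrix
    for a, i in enumerate(J):
        N_i = np.array([[delta(i, j, l, k) for l in J] for j in J])
        assert np.allclose(N_i @ d, d[a] * d)

    # S-matrix from the non-normalized Verlinde coefficients s_ij = (-1)^{2(i+j)} [(2i+1)(2j+1)]
    s = np.array([[(-1) ** int(2 * (i + j)) * qnum((2 * i + 1) * (2 * j + 1), k)
                   for j in J] for i in J])
    S = s / D
    assert np.allclose(S @ S.T, np.eye(len(J)))         # unitarity (S is real)
    assert abs(np.linalg.det(S)) > 1e-10                # modularity: det(S) != 0
    assert np.allclose(D * S[0], v2)                    # D S_{0j} = v_j^2
    print("checks passed for SU(2)_%d:  D = %.6f" % (k, D))
```

These checks are of course implied by the category-theoretic statements above; the sketch only serves to fix conventions for the reader who wants to experiment with small levels.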
Finally, by combining this annihilation relation with an F-move we obtain a further identity. Note that this relation is compatible with (3.23) because of an equality which in turn holds because of (3.20). With all these ingredients of graphical calculus and their algebraic interpretation, we are now ready to define the graph Hilbert space and the vacuum.

IV. GRAPH HILBERT SPACE

In this section, we are going to use the rules of graphical calculus introduced above in order to assign a graph Hilbert space to two-dimensional manifolds with defects [59,70]. We will then see in which sense certain states in this Hilbert space describe the TV vacuum, while others represent excited states carrying curvature and torsion (i.e. violations of the flatness and Gauss constraints). In the next section we will then focus more specifically on these excitations, and explain how they are related to ribbon operators and the structure of a Drinfeld centre. Following the idea of extended TQFTs, the excitations on top of the vacuum should be carried by (or located around) defects in the manifold. In the construction of the (2 + 1)-dimensional SU(2) BF vacuum [24-26], we have realized this in a geometrical setting where triangulations of the spatial manifold play an important role. There, indeed, the defects were located on the embedded vertices of the triangulation. The refinement operations were given by the Alexander moves, which add triangles and vertices (i.e. defects) in the flat vacuum state. Here, instead, we are going to start with a manifold containing embedded punctures without any a priori reference to a specific triangulation. Let us therefore consider a two-dimensional compact and orientable Riemann surface Σ_p with an arbitrary number p of embedded punctures. For reasons which will become clear later on, we should think of the punctures as being obtained by removing discs and placing an embedded marked point on each corresponding boundary component. One can see Σ_p as arising from placing p punctures in an initial manifold Σ. As a remark, notice that instead of representing the punctures as removed disks with a marked point on the boundary, one could also use an embedded point together with a vector attached to it [70]. The strands (representing objects of the fusion category SU(2)_k) and later the ribbons (representing objects of the Drinfeld centre of SU(2)_k) then have to arrive at the point representing the puncture by being tangential to this vector. This information (or equivalently the marked point on the boundary of the removed disk) is important for the proper identification of the excitations. In particular, it will make a difference if a strand goes "straight" into the puncture, or instead first winds around the puncture and then enters it. The difference between these two situations is a Dehn twist of the cylinder-like region around the puncture. Note that in our graphical representation we will suppress the marked points (or the vectors) on the punctures for the clarity of the drawings. Punctures will be depicted by thick black dots and the (suppressed) vectors will always be horizontal, either pointing to the right (if the puncture is located on the left part of the figure) or to the left (if the puncture is located on the right part of the figure). On the manifold with embedded punctures, we can now define the vector space of graphs.

Definition IV.1 (Space V_{Σ_p} of graphs).
First, consider a trivalent graph embedded in Σ_p, and allow for the possibility of having, for each puncture, a single strand ending at the marked point. Then, given a fixed level k and the corresponding set J of spins, consider colorings of this graph with spins such that the admissibility conditions (3.3) are satisfied at every trivalent node.

In order to turn V_{Σ_p} into a Hilbert space H_{Σ_p}, we need to understand graphs in Σ_p as states, and in particular specify a basis and an inner product for these states. As we will now see, this construction depends on the number of punctures, on the topology, and on the level k. For the sake of clarity, we are going to focus in the rest of this article solely on the two-sphere, and refer to it simply as the sphere. The construction of the basis states for the higher genus case is explained in appendix B.

A. Basis states

Let us assume that the punctured manifold Σ_p is a p-punctured sphere S_p. First, if there are no punctures, it is clear that because of the trivial topology any graph on S_0 can be evaluated to a C-number using the rules of graphical calculus introduced in the previous section. We therefore have that dim H_{S_0} = 1 (footnote 16), and the unique basis state can be chosen to be the empty graph. If there is a single puncture, it follows from (3.15) that dim H_{S_1} = 1 as well. An example of a state on S_1 is represented in figure 1. The first non-trivial case is that of the sphere with two punctures, S_2, which is topologically a cylinder. Because of this non-trivial topology there is a non-contractible cycle around which strands can wind, thereby preventing certain graphs from being reduced to a number (i.e. to a certain coefficient multiplying the empty graph on S_2). One can describe a basis of H_{S_2} by considering the graphs of the form (4.1). Because this graph has two trivalent nodes, the dimension of the graph Hilbert space (i.e. the number of allowed states) on the cylinder is given by $\dim H_{S_2} = \sum_{ijrs} \delta_{ijr}\,\delta_{ijs}$, and therefore depends on the level k. For example, if k = 1, there are only two allowed spin labels, J = {0, 1/2}, and four basis states. These states are represented in figure 3. In particular, one can see that the second and fourth states cannot be identified due to the nature of the punctures. Indeed, the embedded marked point prevents us from unwinding the strand which goes around the upper puncture and from smoothly deforming the fourth state into the second one. It turns out that states on the cylinder will play a very important role in what follows. We will study their properties in detail in the next section. We can now easily generalize this construction to the sphere S_p with p punctures (footnote 17). In this case, a minimal graph has a tree-like structure and basis states can be written in the form (4.2). Such a graph has (5p − 6) strands for p ≥ 2 punctures, and, as explained above, the basis states are given by the admissible spin colorings of the strands. The dimension of the Hilbert space for p ≥ 2 is obtained by summing over the admissible colorings, where the vector label $\vec{\jmath} = \{j_1, \ldots, j_{p-1}\}$ denotes a set of spins to be summed over, and each sum should range over the set J of spin values determined by k.

Footnote 17: When using these planar representations one should always remember that the graphs are actually defined on the sphere.
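The counting of basis states on the cylinder can be reproduced directly from the formula quoted above; the short sketch below (again an illustrative computation, not part of the construction, reusing the hypothetical `spins` and `delta` helpers introduced earlier) evaluates $\dim H_{S_2}$ for a few levels and confirms in particular the four states found for k = 1.

```python
from fractions import Fraction

def spins(k):
    return [Fraction(n, 2) for n in range(k + 1)]

def delta(i, j, l, k):
    ok = (i <= j + l) and (j <= l + i) and (l <= i + j)
    ok = ok and (i + j + l).denominator == 1 and (i + j + l) <= k
    return 1 if ok else 0

def dim_cylinder(k):
    """dim H_{S_2} = sum_{i,j,r,s} delta_{ijr} delta_{ijs}, as quoted in the text."""
    J = spins(k)
    return sum(delta(i, j, r, k) * delta(i, j, s, k)
               for i in J for j in J for r in J for s in J)

if __name__ == "__main__":
    for k in range(1, 6):
        print("k =", k, " dim H_{S_2} =", dim_cylinder(k))
    assert dim_cylinder(1) == 4   # the four basis states of figure 3
```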
Let us end this subsection with a comment on triangulations. Since the graphs which we are considering are trivalent, they are dual to (possibly degenerate) triangulations. These triangulations of punctured manifolds can be built in a systematic manner as follows: First, recall that a puncture is represented by a marked point on the boundary of a hole obtained by removing a disk. In order to obtain a triangulation of a punctured manifold, it is important to draw explicitly these holes and to place the marked points on their boundaries. One can then place a circular triangulation edge on the boundary of each hole, i.e. an edge whose source and target nodes are the same, and choose this node to be opposite to the marked point (with respect to the center of the hole). By doing so, one obtains a triangulation of the boundary of the hole on which are sitting the marked point and a node. Once a circular edge has been drawn for every hole, one can connect the nodes with additional edges so as to obtain a triangulation of the punctured manifold. By triangulation, we mean in particular that every face has to have three boundary edges. A particular class of triangulations of the punctured sphere S_p are so-called minimal triangulations. These have the defining property of having only p nodes, which are the nodes of the circular edges surrounding the puncture holes. Such a minimal triangulation for the two-punctured sphere is represented in figure 4. The graph dual to such a minimal triangulation is of the form described in definition IV.1. The equivalence relations (3.4), (3.8), and (3.15) allow one to deform, refine, and change the graph, and the corresponding changes of triangulations include the 2-2, 3-1, and 1-3 Pachner moves, which are ergodic on the space of two-dimensional triangulations with fixed topology. Notice also that vertices of the triangulation (if the latter is not minimal) which are not situated on the boundaries of holes can freely be moved, thereby realizing (spatial) diffeomorphism symmetry [96-98] for all vertices not associated to punctures. For the punctures themselves, one would need to perform a group averaging over spatial diffeomorphisms, as is done in the AL case [3].

B. Inner product

We have so far obtained a systematic way of writing down basis states for the space of equivalence classes of graphs on the punctured sphere. An inner product on H_{S_p} can now simply be obtained by declaring these basis states to be orthonormal. However, for this definition to be consistent, one has to ensure that it is independent of the choice of the basis states themselves. Part of the arbitrariness in choosing these states has already been fixed by restricting ourselves to the minimal graphs with the general tree-like structure of (4.2). However, we are still left with the freedom of choosing these minimal graphs up to local moves which preserve the number of strands, i.e. up to the F-moves of equation (3.8). In order to see explicitly that the inner product is indeed independent of the choice of basis states, let us focus for simplicity on the case with two punctures. Two choices of basis states which differ by an F-move are given by Q and Q′ in (4.4). By definition, the inner product constructed from the basis Q declares these Q states to be orthonormal. Using the orthogonality relation (3.9c), we can check that the inner product defined by the basis Q′ gives the same result. This calculation can easily be generalized to any number of F-moves affecting the tree-like minimal graph of the basis (4.2), thereby showing that the inner product is indeed independent of the choice of basis states, and providing us with a consistent definition.
Although this inner product has a simple definition in terms of a basis on S_p, it can be cumbersome to use if one wants to compute the norm of "complicated states" (i.e. with braiding for example), since these then have to be expanded in the basis itself. For this reason, it will be useful later on to consider an alternative inner product, which in [59] is called the trace inner product. The trace inner product $\langle \Psi_\Gamma | \Psi_{\Gamma'} \rangle_{\mathrm{tr}}$ between two states on graphs Γ and Γ′ is defined (footnote 18) by reflecting the graph Γ′, connecting the open strands of the two graphs (i.e. those ending at the punctures), reducing the resulting closed graph to a number, and dividing by a factor of $v_j$ for each pair of strands being connected. In this definition, we have to be precise about the way in which the open strands should be connected, since there can be obstructions to doing so without an additional step. First, notice that it is well-defined and unambiguous to connect the open strands ending at punctures. This is because the punctures are embedded (i.e. possess a specific location) and can therefore be identified and matched two-by-two between states Ψ_Γ and Ψ_{Γ′} defined on the same punctured manifold Σ_p. Now, for graphs with no closed strands going around the punctures, the definition of the trace inner product is immediate to apply, and one finds for example agreement with the orthonormality of the basis states for S_3. However, for states with closed (standard or vacuum) loops going around the punctures, the open strands ending at these punctures in Γ and Γ′ cannot a priori be connected. In order to define the trace inner product for such states, one has to add the following additional step. For each pair of punctures in Γ and Γ′ whose open strands can a priori not be connected, divide by a factor of D, and add a closed vacuum loop passing through and linking the two closed loops around the punctures. Taking the three-punctured sphere as an example, we prove in calculation (C1) of appendix C that this prescription for the trace inner product does indeed produce the result

$\langle Q^{\imath'\jmath' n'}_{r's'} | Q^{\imath\jmath n}_{rs} \rangle_{\mathrm{tr}} = \delta_{\imath'\imath}\, \delta_{\jmath'\jmath}\, \delta_{n'n}\, \delta_{r'r}\, \delta_{s's}$, (4.8)

as expected. The trace inner product does in fact possess a geometrical interpretation. Once possible closed strands have been removed from around punctures by inserting the appropriate number of vacuum loops and factors of D^{-1}, the trace inner product amounts to reversing the orientation of the manifold Σ_p on which the state Ψ_{Γ′} is defined, and then gluing two-by-two the marked points of the punctures between Ψ_Γ and Ψ_{Γ′} by identity-connected diffeomorphisms. Then, the holes of the resulting higher genus manifold have to be filled, which results in a topologically trivial manifold on which the glued graph can be evaluated to a number. This schematic explanation is the reason for which, when computing the trace inner product, the punctures disappear once their open strands have been connected. In particular, it also means that two naked punctures in Ψ_Γ and Ψ_{Γ′} will disappear (or be connected by strands of spin j = 0) under the trace inner product, and therefore not contribute to it.

C. Vacuum

We have so far obtained a description of the graph Hilbert space H_{Σ_p} and of its basis states. As explained above, a generic state in H_{Σ_p} can have open strands ending at the punctures as well as strands winding around the punctures. In fact, a puncture has two special properties. First, it allows for open strands to end, thereby representing violations of the Gauss constraint.
Second, it prevents strands from being freely deformed or moved on Σ_p, thereby representing violations of the flatness constraint (footnote 19). In this sense, the punctures can carry torsion and curvature excitations. By definition, states which do not present such excitations correspond to the vacuum. In the vacuum, there are no open strands ending at the punctures, and strands can freely be pulled over the punctures. States in the vacuum can therefore be seen as living effectively on a manifold with trivial topology, and thus can always be reduced to a coefficient multiplying a minimal vacuum state. In this sense, the vacuum of the cylinder is given by a single state equivalent to the empty graph where the punctures are made "invisible" by being surrounded by vacuum loops. On punctured manifolds of arbitrary topology however, the vacuum can be degenerate and described by several states. In order to understand this in more detail, one can introduce the Gauss and flatness projection operators. As is well-known from the spin network representation of lattice gauge theory, the Gauss constraint is defined at the three-valent nodes by the requirement that the triple of spins labeling the incoming strands be admissible in the sense of (3.3). Let us schematically denote by n(i, j, k) a three-valent node connecting strands with spins (i, j, k). With this notation, we can then define the action of the Gauss projection operator as $B_n\, n(i, j, k) = \delta_{ijk}\, n(i, j, k)$. Because SU(2)_k has no fusion multiplicities, this action is simply equal to one or zero depending on whether the given fixed triple (i, j, k) of spins is admissible or not, and therefore does indeed define a projector. When acting on generic basis states with free spin labels, as for example (4.1), B_n should be understood as a function of the spins which multiplies the state it acts on by a factor of $\delta_{ijk}$. This implements in turn the relations $B_n\, n(i, j, 0) = \delta_{ij}\, n(i, i, 0)$ and $B_n\, n(i, 0, 0) = \delta_{i0}\, n(0, 0, 0)$. In particular, we should now see the marked point of a puncture as a three-valent node with spins (i, 0, 0), where i is the spin labeling the incoming open strand. The operator B_n can therefore act on a marked point, and does so by removing the open strand and inserting a factor of $\delta_{i0}$. This factor of $\delta_{i0}$ then constrains the node n(i, r, s) from which the open strand was departing, and together with $B_n\, n(i, r, s) = \delta_{irs}\, n(i, r, s)$ consistently enforces that $n(i, r, s) = \delta_{i0}\, \delta_{rs}\, n(0, r, r)$. Our definition of the Gauss projection operator therefore guarantees that, when acting on a generic state in H_{Σ_p}, the operator $\prod_n B_n$ returns a state in H_{Σ_p}. It is clear that since we have defined the graph Hilbert space H_{Σ_p} as being spanned by graphs with admissible spin colorings at the three-valent nodes which are not marked points, only the punctures can potentially carry violations of the Gauss constraint. Similarly, the local equivalence relations which define the graph Hilbert space implicitly guarantee that, on the sphere, there is no curvature located "away from the punctures". The punctures represent the only obstruction to deforming graphs and evaluating them on the sphere, and in this sense can be thought of as carrying the curvature.
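Since B_n acts diagonally on the spin labels, it can be modelled by a simple multiplicative factor. The following toy snippet (a sketch with made-up data structures, written purely for illustration and not reflecting the paper's formalism) applies the Gauss projector to the node labels of a small colored graph, encoding a marked point with an open strand of spin i as a node (i, 0, 0), so that the projector enforces i = 0 there.

```python
from fractions import Fraction

def delta(i, j, l, k):
    ok = (i <= j + l) and (j <= l + i) and (l <= i + j)
    ok = ok and (i + j + l).denominator == 1 and (i + j + l) <= k
    return 1 if ok else 0

def gauss_projector(nodes, k):
    """prod_n B_n acting on a coloring given as a list of node labels (i, j, l).
    Returns 1 if the coloring survives the projection, 0 if it is projected out."""
    factor = 1
    for (i, j, l) in nodes:
        factor *= delta(i, j, l, k)
    return factor

if __name__ == "__main__":
    half = Fraction(1, 2)
    zero = Fraction(0)
    k = 2
    # an admissible trivalent node together with a puncture carrying no open strand: survives
    print(gauss_projector([(half, half, 1), (zero, zero, zero)], k))   # -> 1
    # a puncture with an open spin-1/2 strand ending on it: projected out
    print(gauss_projector([(half, half, 1), (half, zero, zero)], k))   # -> 0
```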
In order to understand the vacuum as a state with no curvature, let us introduce the flatness projection operator B_p, defined by inserting a vacuum loop around the puncture and multiplying by a factor of D^{-1} (4.10). The factor of D^{-1} is included in order to ensure that $B_p^2 = B_p$. Because of the fundamental sliding property (3.28) between strands and vacuum loops, we see that the operator B_p renders the punctures "invisible" by freely allowing strands to slide over them. In this sense, B_p removes the curvature located at the punctures. This property can also be understood algebraically by going back to the classical group picture. In the limit q → 1, if we interpret the spin label j as being the Fourier component of a group element, we see from the definition of the vacuum loops that they enforce a delta function $\delta(g_p)$ on the group (footnote 20), where g_p is the holonomy around the puncture. The flatness operator is therefore imposing that the holonomy g_p around the puncture be trivial. For SU(2)_k at root of unity, since there is no Fourier transform defining a group representation, the only way to define a flatness projection operator is via the spin representation (4.10). In the next two subsections we will explain the relationship between B_p, the embedding maps, and the TV partition function on a three-manifold. We can now define the vacuum as the space of states in the original graph Hilbert space H_{Σ_p} which are invariant under the action of the Gauss and flatness projection operators for all nodes and punctures. Explicitly, this is the space

$H^0_{Σ_p} = \{ \Psi_\Gamma \in H_{Σ_p} \;|\; B_n \Psi_\Gamma = \Psi_\Gamma \text{ and } B_p \Psi_\Gamma = \Psi_\Gamma \text{ for all nodes } n \text{ and punctures } p \}$, (4.12)

where Ψ_Γ denotes a state in H_{Σ_p} defined by a graph Γ. In the case of the p-punctured sphere, this space is one-dimensional and a basis is given by the unique vacuum state corresponding to the empty graph on S_p. Since the operators B_n and B_p all commute with one another (independently of the location of the punctures), there is no ordering ambiguity and the definition (4.12) is meaningful. We explicitly show this on the example of the basis Q for H_{S_2} in appendix D. Notice that there is a natural isomorphism $H^0_{Σ_p} \simeq H_\Sigma$ between the space (4.12) of vacuum states and the graph Hilbert space on the manifold Σ without any punctures. Finally, let us conclude this subsection by introducing the Wilson loop operator. This operator is defined by inserting a normalized Wilson loop of spin l in Σ_p. This is nothing but a measure of curvature. Indeed, if W_l acts "away" from punctures, the loop can be evaluated and, together with the normalization coefficient, one gets the identity. This is because the local equivalence relations defining the graphs amount to having flatness away from the punctures. Similarly, if W_l acts on a puncture surrounded by a vacuum loop, i.e. a puncture in the flat vacuum state, the l loop can be pulled over the puncture and one gets the identity again. Therefore, there is curvature located in a region whenever the action of the operator W_l is, for any l, different from the identity. This is the case in particular around a naked puncture (i.e. a puncture not in the vacuum state).

D. Embedding maps

Now that we have obtained a description of the vacuum on the graph Hilbert space H_{Σ_p}, we are able to define refinement operations and embedding maps. These are the operations which will enable us, in particular, to extend the inner product of section IV B to states which live on the same underlying manifold but with different numbers of punctures. As we have seen, a state carries degrees of freedom in the form of torsion and curvature excitations located at the punctures. When refining a state, one allows for the possibility of describing more degrees of freedom by adding a puncture in the vacuum configuration, and this puncture can then be excited with the creation operator which we will define later on in section VII.
More precisely, the embedding map defined by the vacuum is the mathematical structure which enables us to refine a given state and embed it in a continuum Hilbert space defined as an inductive limit over the punctures. The embedding maps are given by the operation of adding embedded punctures in the vacuum state. Remember that a puncture in the vacuum state is simply represented graphically by a puncture surrounded by a vacuum loop. If we denote by Σ_{p+q} the punctured manifold obtained by adding q punctures to the manifold Σ_p, the embedding maps are given by (4.14), where the symbol ×_q simply means that we are attaching q vacuum punctures to the state Ψ_Γ. As a technical remark, note that we are carefully using the notation Σ_{p+q} to indicate that the embeddings of the p common punctures of Σ_p and Σ_{p+q} have to agree. Similarly, the embedding maps in the SU(2) BF vacuum [26] are defined between a coarse triangulation ∆ and a finer triangulation ∆′ obtained by refining ∆, which means that all the embedded vertices of ∆ are contained in ∆′. Here a similar requirement is necessary for the punctures. With these embedding maps, we are now able to define the inner product between states on manifolds Σ_p and Σ_q where the punctures are not required to have the same location. For this, one has to use the map E_{p,p+q} to add the punctures of Σ_q to Σ_p, and, likewise, the map E_{q,q+p} to add the punctures of Σ_p to Σ_q. In case one wishes to embed a new puncture at a position already occupied by a strand or a node of the graph, one first has to slightly deform the graph using the equivalence relations (3.4), (3.8), and (3.15). By doing this operation, one embeds the states Ψ_Γ ∈ H_{Σ_p} and Ψ_{Γ′} ∈ H_{Σ_q} into the common larger Hilbert space H_{Σ_{p+q}} on which the inner product can then be computed. In fact, the punctured manifold Σ_{p+q} on which this larger Hilbert space is defined can be thought of, in the terms of [24-26], as the common refinement of the manifolds Σ_p and Σ_q. It is necessary to resort to this common refinement in order to compare states which have support on manifolds with different number or location of the punctures. As an illustrative example, one can consider the two states of (4.15), whose inner product can be computed by embedding Ψ_Γ in H_{S_3}. One then finds the result (4.16). To compute this trace inner product we have simply applied the definition given above. In particular, we have divided by a factor of D and linked an additional vacuum loop through the vacuum loop encircling the new puncture in Ψ_Γ. Then, using the identity (4.17), which removes the vacuum loop encircling the new puncture in Ψ_Γ, we have connected the two naked punctures with strands of spin j = 0 and evaluated the rest of the trace inner product graph. In particular, this calculation shows that the two states in (4.15) are not orthonormal because of the factor of D^{-1} in their inner product. This is to be expected given the fact that a naked puncture is different from a puncture in the vacuum state, i.e. a puncture surrounded by a vacuum loop. The former carries curvature while the latter does not. Let us now show that the inner product is cylindrically consistent with respect to the embedding maps. This crucial property, which also holds in the AL and SU(2) BF representations (although in a different technical setting), ensures that the inner product between two states is independent of the choice of refined punctured manifold on which it is evaluated.
If we denote by Ψ_Γ and Ψ_{Γ′} two states in H_{Σ_p}, cylindrical consistency is the statement that their inner product agrees with the inner product of their images under the embedding maps E_{p,p+q}. Using the definition of the trace inner product and of the embedding maps, it is straightforward to show that this relation holds. In fact, it is sufficient to show that it holds for the case q = 1, since then the result can be extended recursively. From the definition of the trace inner product, one can see that the inner product between disconnected components of the graphs can be computed separately and then multiplied back together. In particular, this means that cylindrical consistency amounts to the requirement that a puncture surrounded by a vacuum loop be a state of unit norm. This is indeed the case, as follows from the explicit evaluation together with the fact that a vacuum loop away from a puncture is in fact equal to the number D. Finally, let us comment on the following important subtlety: so far, with our definitions, two Hilbert spaces H_{Σ_p} and H_{Σ_q} might not have a common refinement. This lack of a common refinement leads to superselection sectors in an inductive limit Hilbert space based on the embedding maps (4.14). This possible lack of a common refinement is due to the marked points: consider the case in which two surfaces Σ_p and Σ_q agree in the positioning of the punctures, to the extent that the same small disks have been removed from Σ, but the punctures disagree in their marked points. Since we allowed only one marked point per puncture (correspondingly, one open strand possibly ending at this puncture), we cannot construct a common refinement for this case. One possibility to deal with this situation is to prescribe a certain way of choosing the marked points at the punctures (e.g. by fixing a coordinate chart and using the coordinates to this end), so that this situation does not arise. Another possibility is to allow for an arbitrary number of marked points (and correspondingly open strands possibly ending at the punctures). We will briefly discuss this possibility in appendix G.

E. Relationship with the Turaev-Viro state sum

One of the numerous motivations for working with the BF representation and its Λ ≠ 0 generalization to the TV representation is that it connects the kinematical structure of the canonical theory with the spin foam amplitudes of the covariant approach. As already explained in [24] for the BF representation, the reason for this is that the embedding maps describe refinements of the triangulation which can be seen as the gluing of flat tetrahedra (and their associated spin foam amplitudes) onto the triangles being refined. In this section, we explain how this is also realized in the TV representation, and explain the relationship between the kinematical vacuum of section IV C and the TV state sum. The TV state sum, whose definition is recalled in appendix E, is a prescription for computing a bulk topological invariant for a triangulated three-dimensional manifold with possible boundary components. It can be understood as taking as input some boundary data, which consist of a triangulation ∆ of the boundary Σ and a coloring ψ of its edges with spins, so that the coupling rules at the dual nodes (or equivalently for spins associated to one triangle) are satisfied. These data live in state spaces K_{Σ,∆}. This already bears a resemblance to the TV representation, since the latter is based on the graph Hilbert space H_{Σ_p}, whose elements are colored three-valent graphs, and three-valent graphs are dual to triangulations.
Here we assume that the coloured graphs in H_{Σ_p} satisfy the Gauss constraints (thus the coupling rules at the nodes of the dual graph hold). More precisely, one can always map by duality a state in K_{Σ,∆} to a state in H_{Σ\∆_0}, where Σ\∆_0 is the punctured manifold obtained by removing from Σ the vertices of its triangulation. Now, just like the space of ground states H^0_{Σ_p} is defined in (4.12) as the image of a projection operator, the TV state sum assigns to two-dimensional manifolds a vector space which is the image of the projector defined by the state sum itself, and which can be shown not to depend on a particular choice of triangulation ∆ [54]. Then the precise relationship between the TV state sum and the graph Hilbert space, which has been proven in [54,70], is that there is an isomorphism $K^0_\Sigma \simeq H^0_\Sigma$. This result, which means in essence that the TV state sum is a projector onto the TV vacuum state, is an isomorphism between spaces which are characterized by the manifold Σ alone, and by no other auxiliary structure (such as triangulations or punctures) thereon. This result can also be understood by looking at the explicit algebraic relation between the projection operator B_p and the TV state sum. Consider for instance that the initial and final boundary triangulations are the same. Such boundary triangulations can then be connected by a series of so-called tent moves [99-101], which have the property of not changing the connectivity of the triangulation. Starting from the initial triangulation, in order to ensure that the bulk triangulation has everywhere some "thickness", one has to apply a tent move to every vertex of the triangulation. It turns out that the TV state sum for a tent move on a vertex can be seen as an operator acting on H_{Σ\∆_0}. This operator agrees with the action of the operator B_p. In order to see this, one can first map the initial (colored boundary triangulation) state in K_{Σ,∆} to a state in H_{Σ\∆_0}. This is done by considering the dual graph Γ_∆ and the spin coloring of its strands inherited by duality from the edges of ∆. In this duality, the vertices ∆_0 can be seen as punctures at the centre of the faces of Γ_∆. One can then compute the action of the projector B_p on such a face/puncture. We recall in (C6) (in the case s = 6) the proof of the resulting expression (4.23), in which we have used the totally tetrahedral-symmetric symbol $G^{ijm}_{kln} := F^{ijm}_{kln}/(v_m v_n)$, and omitted for clarity the puncture in the middle of the face. This formula is the algebraic expression for the tent move, which, seen from the point of view of the triangulation, corresponds to a gluing of tetrahedra (4.24). Each of the G-symbols in (4.23) labels a tetrahedron, with the pattern indicated in (4.24) on the example of $G^{n_4 l_3 n_3}_{j_3 k j_4}$. The gluing of tetrahedra in (4.24) represents a triangulation of a ball B, whose TV state sum is related to (4.23) via the formula (4.25). Note that the prefactors which appear on the right-hand side of this expression can be reabsorbed in the definition of the map K_{Σ,∆} → H_{Σ\∆_0} (as is done in [70]). Equation (4.25) gives a relationship between the inner product in H_{Σ\∆_0} with insertion of a projector, and the TV partition function, for the example of a tent move. This relation can be generalized in two ways. First, one can consider the partition function between initial and final triangulations which differ from each other.
Since the partition function is independent of the choice of bulk triangulation, we can choose an arbitrary one. We can first perform a tent move on each vertex of the initial triangulation, which does not change the connectivity of the initial triangulation but projects out any curvature from the state associated with the initial spin coloring. We can then connect the initial and final triangulations by some bulk triangulation. This can be interpreted as gluing tetrahedra to a series of hypersurfaces, starting with the initial triangulation and ending with the final triangulation. The gluing of tetrahedra to the hypersurface corresponds to a change of the two-dimensional triangulation via 2-2, 3-1 and 1-3 Pachner moves [101]. These Pachner moves are implemented by the state equivalence relations (3.4), (3.8), and (3.15). Now, one can check that the equivalences are defined in such a way that each gluing of a tetrahedron comes with a G-symbol attached to it, which is also consistent with the gluing rule for the TV partition function. The inner product between two states in H_{Σ_p} is defined by using the state equivalences (3.4), (3.8), and (3.15) to transform the two states to a common computational basis (in particular a common underlying graph). One can then evaluate the inner product using the orthonormality of this basis. These steps can again be understood, through the relation of the equivalences to the gluing of tetrahedra, as a further refinement of the bulk triangulation, which leaves the TV partition function unchanged. In summary, (4.25) generalizes to different initial and final boundary triangulations with respective spin colorings ψ and ψ′, and one has the relation (4.26). A further generalization of this result is to allow for punctures with curvature and possible torsion excitations as well. This requires the generalization of the TV state sum model to the extended TV model [54]. In the (2 + 1)-dimensional picture, the two-dimensional punctures are "evolved" into three-dimensional tubes which have to be excised from the three-dimensional manifold. One can then show that the extended TV state sum computes the inner product in H_{Σ_p} between boundary kinematical states with excitations.

V. THE TWO-PUNCTURED SPHERE

We have so far developed a general understanding of the graph Hilbert space and the vacuum states on the punctured sphere. As we will now see, in the case of the cylinder the Hilbert space and its basis states do actually possess some extra mathematical structure, the understanding of which will play an important role in the construction of the ribbon excitation operators. We devote this section to the study of these properties. On punctured manifolds, there exists a gluing operation # along the punctures. If we denote by $\Sigma^g_p$ a p-punctured manifold of genus g, and leave aside properties other than topological, this operation is such that $\Sigma^{g_1}_{p_1} \# \Sigma^{g_2}_{p_2} = \Sigma^{g_1+g_2}_{p_1+p_2-2}$. Two-punctured spheres therefore play a special role from the point of view of this operation, since two of them can be glued along a puncture to again obtain a two-punctured sphere, i.e. $\Sigma^0_2 \# \Sigma^0_2 = \Sigma^0_2$. Moreover, a two-punctured sphere can always be glued along a puncture to another punctured surface of arbitrary topology without changing this topology or the number of punctures, i.e. $\Sigma^g_p \# \Sigma^0_2 = \Sigma^g_p$. In the previous section, we have constructed a Hilbert space of states by considering graphs on punctured manifolds.
Since each puncture carries a marked point where open strands are allowed to end, when gluing punctured manifolds we can also consistently glue their graphs by matching the strands at the marked points. In this sense, we can understand the gluing of a two-punctured sphere onto a punctured manifold Σ_p as the action on H_{Σ_p} of an operator, where the details of this action depend on the graph state on the two-punctured sphere being glued. Let us explain in more detail how this comes about. Recall that the two-punctured sphere is topologically equivalent to a cylinder. Since a cylinder can be seen as a quadrangle with two opposite sides identified, the two-punctured sphere can be minimally triangulated by two triangles, as represented in figure 5. This is just another way of understanding the result of the previous section, namely the fact that an orthonormal basis on the cylinder is given by the Q states defined in (4.1). Since these Q states represent a basis, any other state in H_{S_2} can always be written as some linear combination f(Q). The gluing of a cylinder state can therefore be understood as an operator (footnote 21). We are going to encounter below examples of states expanded in the Q basis and explicit computations of their action as operators. In particular, among all the operators defined by states on the cylinder, operators that act as projectors will play an important role in what follows. They are the operators which will enable us to characterize via certain stability conditions the quasi-particle excitations (i.e. the excited states of the theory) and to construct the excitation operators. In order to understand the role of states acting as projectors and to see in which sense they satisfy a certain stability property, let us first focus on states which only carry curvature excitations. The action of the Wilson loop operator W_l on the basis states $Q^{jj}_{00}$, computed in (5.4), leads to a superposition of such states. Moreover, this action is also non-trivial on the state $Q^{00}_{00}$ with a loop of spin j = 0, in which case one finds $W_l\, Q^{00}_{00} = v_l^{-2}\, Q^{ll}_{00}$. This is just a manifestation of the fact that naked punctures carry curvature. We see from the simple calculation (5.4) that the states $Q^{jj}_{00}$ are not eigenstates of the Wilson loop operator. Instead, the action of the latter leads to a superposition of these states. In this sense, the states $Q^{jj}_{00}$ do not satisfy any particular stability property under the action of the Wilson loop operator. We are therefore naturally brought to look for states which are eigenstates of the Wilson loop operator. In terms of group variables, such states would be analogous to $\psi([g]) = \int \delta(hgh^{-1})\,\mathrm{d}h$, which is indeed a gauge-invariant eigenstate of the Wilson loop operator. Here, in the quantum group picture with spin labels, such states are given by the graphical representation (5.5), which is a strand of spin j linked with non-trivial braiding to a vacuum loop. These states have a rather complicated expansion (5.6) in terms of the Q basis, where a proof of the first equality is given in (C7), and for the second equality we have used the definitions (A10) and (A11) for the S-matrix. This is an example of an expansion O = f(Q) of a cylinder state into the original Q basis. Now, one can indeed check that these states have the desired property, namely that of being eigenstates of the Wilson loop operator. The action $W_l\, O^{jj}_{00}$ of the latter is given by (5.7), where we have used (3.14) to obtain the non-normalized s-matrix $s_{ij} = D S_{ij}$.
We therefore see that the states $O^{jj}_{00}$ are eigenstates of the normalized Wilson loop operator, and, using the explicit expression for the s-matrix in (A11) and the quantum dimensions in (A3), one finds that the eigenvalues are given by

$$W_l\, O^{jj}_{00} \;=\; \frac{\sin\!\big(\tfrac{\pi}{k+2}\big)\,\sin\!\big(\tfrac{\pi}{k+2}(2l+1)(2j+1)\big)}{\sin\!\big(\tfrac{\pi}{k+2}(2l+1)\big)\,\sin\!\big(\tfrac{\pi}{k+2}(2j+1)\big)}\; O^{jj}_{00}. \qquad (5.8)$$

This should be compared with the eigenvalue of the (normalized) Wilson loop operator in the undeformed SU(2) case, which is given by $\sin\big((2l+1)\theta\big)/\big((2l+1)\sin\theta\big)$ for a state $\int \delta(hgh^{-1})\,\mathrm{d}h$ peaked on a class angle θ(g) ∈ [0, π]. This does indeed agree with the large k (i.e. "classical" group) limit of (5.8), where the class angle is given by θ ∼ π(2j + 1)/(k + 2), and we can conclude that the spin label j in $O^{jj}_{00}$ is a measure of curvature. Note that the calculation (5.7) is a special case of the more general result (5.10), which tells us how the state $O^{jj}_{00}$ changes when it crosses a strand of spin l. A proof of this relation is given in (C8). One can see that by closing the strand l (while avoiding the left puncture) and dividing by $v_l^2$ we indeed recover (5.7). Moreover, one can recognize from (5.10) that the set of states $\{O^{jj}_{s0}\}_s$ is stable under the operation which consists in crossing a strand. We will see shortly that this property characterizes quasi-particle excitations. The appearance of the S-matrix in (5.6) and (5.7) is not a coincidence. We were looking for eigenstates of the (here non-normalized) Wilson loop operators $v_j^2\, W_j$. As can be seen from (5.4), the Wilson loop operators themselves form an algebra (5.11), where the multiplication coefficients are given by the fusion matrices $\delta_{jkm} = (N_j)_{km}$. Therefore, a diagonalization of $(N_j)_{km}$ leads to a diagonalization of the Wilson loop operator $W_j$. As stated in (A14), such a diagonalization is provided by the S-matrix, i.e. the matrices diagonalizing $N_j$ are independent of j, and the corresponding eigenvalues are given in (A14). The eigenstates $\sum_k S_{pk}\, W_k$ are thus also defining modules of the algebra (5.11), i.e. these states are invariant under the action of $W_j$ for all admissible spins j. In the following subsection, we are going to generalize this kind of reasoning to more generic states.
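The classical limit claimed below (5.8) can be checked numerically: for large k, and for a spin j chosen so that θ = π(2j+1)/(k+2) approximates a given class angle, the q-deformed eigenvalue approaches the undeformed SU(2) expression. The following sketch (an illustration of this limit only, written for this presentation; it does not implement the operators themselves) compares the two expressions.

```python
import numpy as np

def deformed_eigenvalue(l, j, k):
    """Eigenvalue of the normalized Wilson loop W_l on O^{jj}_{00}, eq. (5.8)."""
    x = np.pi / (k + 2)
    return (np.sin(x) * np.sin(x * (2 * l + 1) * (2 * j + 1))
            / (np.sin(x * (2 * l + 1)) * np.sin(x * (2 * j + 1))))

def classical_eigenvalue(l, theta):
    """Normalized SU(2) character: sin((2l+1) theta) / ((2l+1) sin theta)."""
    return np.sin((2 * l + 1) * theta) / ((2 * l + 1) * np.sin(theta))

if __name__ == "__main__":
    l, theta = 1, 1.1          # spin of the Wilson loop and target class angle
    for k in (20, 200, 2000):
        # half-integer spin j whose class angle pi(2j+1)/(k+2) is closest to theta
        j = (round(theta * (k + 2) / np.pi) - 1) / 2
        t = np.pi * (2 * j + 1) / (k + 2)
        print(k, deformed_eigenvalue(l, j, k), classical_eigenvalue(l, t))
```

As k grows, the two printed values converge, in line with the statement that the spin label j measures curvature through the class angle θ ∼ π(2j+1)/(k+2).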
B. General states and the tube algebra

Let us now consider the general states on the cylinder given by the orthonormal Q basis (4.1). As we have explained above, cylinders can naturally be glued together along punctures, and states on the cylinder can be seen as operators under this operation. Since the gluing of two cylinders gives back another cylinder, it is clear that the state obtained after the gluing can again be expanded in the cylinder Q basis. In particular, it is natural to ask what happens to the cylinder basis states themselves under this gluing. In other words, we would like to understand the action of Q, when seen as an operator, on the Q basis states of H_{S_2}. For the sake of definiteness, when gluing two cylinder states by matching their left and right punctures respectively, we require the spins labeling the strands to match, and otherwise define the result of the gluing to be zero. By using the local equivalence relations, we can then compute the expansion in the Q basis of the gluing of two such basis states. This amounts to computing the multiplication rule of the algebra defined by this gluing of states. The calculation, which is given in (C9), leads to the multiplication rule for the gluing of two basis states. If we now introduce suitable linear combinations of Q basis states, we find the product rule (5.16). This multiplication rule for linear combinations of Q basis states defines an algebra known in the literature as Ocneanu's tube algebra [71-74]. We have therefore shown that the generic Q basis states define at the same time a vector space and an algebra. This means that the space V_{S_2} of graphs on the cylinder is a representation space for the tube algebra. Now, this representation space can be decomposed into indecomposable components, which by definition are modules over the tube algebra. Modules are by definition invariant under the action of cylinder basis states, and this stability property justifies the identification of the modules with quasi-particles [60]. In fact, given the multiplication rule (5.16) for the tube algebra, we can write down the condition for the Q basis coefficients to define a projection operator. This condition is that Z × Z = Z, which is a complicated equation when written in terms of the coefficients (5.17). We are therefore naturally brought to look for states which act as projectors. For the case of SU(2)_k, a set of basis states satisfying this projection property is known [59,71,72], and graphically given by (5.19) (footnote 22), where we have introduced the tensors $\Omega^{rs}_{kl,ij}$ defined in (5.20). As we shall see shortly, the tensor Ω will play a very important role in our construction. The projection property for the O basis states, which is proved in (C11), is the stacking relation (5.21). In addition, these new states O are also orthonormal, as can be seen from the trace inner product computation given in (C12). All these properties of the O states become crucial when considering the crossing of a strand. In this situation, one can compute as in (C13) the following generalization of formula (5.10), which shows the stability of the O states (5.23). From this calculation one can derive a graphical representation for the tensor Ω. This graphical representation can be used to easily determine some special cases (with vanishing spins) for the coefficients of the half-braiding tensor. Although the tensors Ω have already been introduced above, it is this stability property of the O states when passing through a strand which should be understood as their definition. We have already seen in the previous subsection that the Q states do not possess this nice stability property when crossing a strand. Instead, in this case, as shown in (C14), one obtains a stacking of two Q states (5.25). Equation (5.25) is the reason why modules of the tube algebra, which by definition are stable under stacking, are also automatically stable when they cross a strand. Since these modules are spanned by $\{O^{ij}_{kl}\}_{kl}$, we prefer to work with the O states instead of the Q ones. Indeed, we will see later on that the O states appear as eigenstates of so-called closed ribbon operators, which give a measure of both curvature and torsion.

Footnote 22: Notice that there is a certain arbitrariness in the choice of these basis states, since one is free to choose the opposite crossings between the vacuum loop and the strands i and j. This is analogous to the freedom in choosing the basis Q or Q′ in (4.4). More precisely, the two choices of O basis states are related by the transformation (6.12).

C. Half-braiding and Drinfeld centre

We have just seen that the projection property of the O states ensures their stability when they cross a strand in the sense of (5.23).
This motivates their identification as quasi-particle excitations, and the introduction of the following graphical notation: (5.26) where the quasi-particle label ξ is a combined notation for the labels (i, j, s) of O. In the literature, it is also customary to call ξ a quasi-particle of type ij, so depending on the context we will use ξ to denote either the particle type ij or the labels (i, j, s). Notice that these new doubled strands now posses an orientation. With this new graphical notation, the property (5.23) tells us how the doubled strands cross and braid with standard strands. In particular, when going over a strand we have that where the tensor Ω rp ql,ξ := Ω rp ql,ij is given in (5.20). This should be compared with the standard resolution (3.24) of a crossing between two strands. Here, instead of being resolved by a braiding R-matrix, the crossing is resolved by the tensor Ω. Mathematically speaking, the map ω : V l ⊗V ξ → V ξ ⊗V l is known as half-braiding, and is essential for the definition of the Drinfeld centre category 23 . This half-braiding has to satisfy a so-called naturality condition. Graphically, this condition amounts to being allowed to pull a double strand over a node, and is therefore given by (5.28) One can recognize that this graphical relation is simply the generalization to the half-braiding (or to the double strands) of the (Yang-Baxter) relation (D6). This equation (D6) holds as a consequence of the definition and of the axioms for the F -symbols and the R-matrix. The condition ( 5.28) represents the same kind of naturality condition as (D6), but now for the half-braiding tensor. In terms of components, this condition is given by where we have dropped the label ξ for clarity. Since we have defined the half-braiding tensors Ω via the strand sliding property (5.23) for the O basis states, which uses only the local equivalence relations (and in particular (D6) which follows from these local equivalence relations), the naturality condition for the half-braiding is automatically satisfied 24 . Notice, that the relation ( 5.29) is exactly the one we would have obtained in (5.17) by looking at the projection condition Z × Z = Z for the multiplication rule (5.16) of the tube algebra. Therefore, the solutions to the projection condition are given by the half-braiding tensors Ω used in (5.19) and (5.20). In summary, when defining the half-braiding tensor via the sliding property (5.23), we do satisfy the naturality condition (5.29). In order for this to hold, it is essential that the O states are stable under the operation of passing through an edge. From the naturality condition follows also the projection condition for the tube algebra, i.e. the stacking property (5.21) for the O states. D. Cylinder ribbon operator With all these ingredients, we are now able to give a preliminary definition of the ribbon excitation operator and to study its action on the vacuum of the cylinder. As a generalization of (5.26), let us define the following oriented ribbons, which start at a quasi-particle of typeξ =īj and end at a quasi-particle of type ξ = ij: where, as an important and useful convention, we have absorbed the factor v r /(v i v j ) into the graphical representation for the source quasi-particle as a filled disk labelled byξ (this does however not apply to the target quasi-particle of the ribbon, which does not posses any weight in the graphical representation, and is therefore just represented by (5.26)). In what follows we will denote these ribbons simply by Rξ ξ . 
The important thing to notice about this definition is that, when seen as a state on the cylinder, this ribbon operator reduces to This can easily be seen by using (3.23), then wrapping the left vacuum loop around the sphere so as to encircle the puncture on the right, and finally using the stacking property (3.29). The ribbon operator (5.30) carries conjugated quasi-particles excitations at its ends. Intuitively, the action of this ribbon operator on a pair of disjoint punctures should therefore amount to placing these quasi-particle excitations at the punctures, i.e. to exciting the punctures. Since these punctures on which we wish to act need not be in the vacuum state, but can already carry their own curvature or torsion excitations, the operation of placing the ribbon quasi-particle excitations at the punctures should in fact be understood as a fusion of excitations. If the punctures are in the vacuum state, this fusion is trivial. This gives us an intuitive idea of how the ribbon operator should be used. One has to start with a ribbon operator where the quasi-particlesξ and ξ are side-by-side in a same neighborhood of Σ p (i.e. not separated by strands). Then, one can move the ends of the ribbon operator to the punctures where we want the quasi-particle excitations to act. By doing so, the ribbon operator has to cross over several strands, but, importantly, the exact path of the ribbon does not matter because of the naturality condition and the sliding property (5.28). Then, one has to evaluate these crossings using the half-braiding tensors, and finally fuse the quasi-particles at both ends with the states at the punctures where they are acting. We are going to illustrate this procedure and in particular define the fusion process in the next section. By analogy with (5.27), we can write the behavior of the ribbon operator (5.30) when it crosses a strand. This defines the braiding of a ribbon operator with a strand as ( 5.32) These are the crossing identities which have to be used when acting with a ribbon operator on punctures which are "far away", i.e. separated by strands. Again, notice that the path along which we choose to connect the two punctures with the ribbon operator does not matter because of the naturality condition (5.28) for the half-braiding tensor. Notice that the half-braiding tensor for the ribbon ij crossing a strand l can be resolved into a strand i over-crossing l and a parallel strand j under-crossing l. This generalizes if the ribbon crosses several strands l 1 , . . . , l n , and all the half-braiding tensors can be resolved by drawing a strand i over-crossing the strands l 1 , . . . , l n and a parallel strand j under-crossing these strands l 1 , . . . , l n (see (7.1)). Without knowing the details of the fusion process (which we will describe in the next section) or having to resolve any crossing, we can already guess the action of the ribbon operator on the cylinder vacuum state. For this, consider R 00 , which is the ribbon operator with vanishing quasiparticle excitations. This should correspond to the unit element of the Drinfeld center fusion algebra, and therefore under the fusion product satisfy 25 Rξ ξ R 00 = R 00 Rξ ξ = Rξ ξ . Now, by noticing that R 00 is proportional to the vacuum on the cylinder, i.e. R 00 = D = DO 00 00 , (5. 33) we see that the natural requirement Rξ ξ R 00 = Rξ ξ leads to Rξ ξ O 00 00 = O ij s 1 s 2 . 
This simple argument shows that the definition (5.30) of the ribbon operator has the desired property, namely that it creates an orthonormal basis state by acting on the vacuum. In the next section we are going to make this statement more precise by defining the action of the ribbon operators via the fusion of quasi-particle excitations. VI. THE THREE-PUNCTURED SPHERE We have so far focused on the cylinder in order to understand the nature of the quasi-particle excitations and the construction of the ribbon operators. Moreover, we have been able to derive the action of a ribbon operator on the simplest state which is the vacuum for the cylinder. However, we eventually wish to define the action of ribbon operators on general states and not only on the vacuum. This means in particular that we have to consider arbitrary states on the sphere with more than two punctures. Intuitively, if we imagine two nearby punctures carrying arbitrary excitations, we would like to replace these two punctures by a single puncture in a new fused state. For this, we should use the local equivalence relations to define a new boundary going around the two punctures and traversed by only one strand. In addition, we should require that the fused state behaves in the same way as the original state on the two punctures, as long as we probe the region outside of the new boundary. This applies in particular to the behavior under the operation of crossing a strand. Topologically, one way of replacing two punctures by a single puncture is equivalent to gluing a three-punctured sphere and filling-in the resulting hole. An "inverse" procedure for replacing two punctures by a single puncture is to cut the manifold Σ p in a region going around the two punctures. This has the effect of removing a three-punctured sphere and of leaving the manifold Σ p with a new puncture in place of the two old ones. This reasoning shows that it is sufficient to focus on the three-punctured sphere in order to understand the fusion process. One possible choice of basis for the graph Hilbert space on the three-punctured sphere is given by the generalization (4.2) of the Q basis. However, just like for the Q basis itself, the states (4.2) do not transform with the half-braiding tensor when they cross a strand, and therefore do not posses a stability property. For this reason, one should consider the generalization of the O basis to a higher number of punctures. A possible choice of orthonormal basis would be to consider a tree-like basis formed by O states. But this would also not transform nicely when the quasi-particle ends cross a strand, since it would lead to two half-braiding tensors (in the three-punctured case). Therefore, following [59], one should consider the so-called fusion basis given by This can of course easily be generalized to the case of p ≥ 3 punctures. As shown in the trace inner product calculation (C15), these states are indeed orthonormal. Using a calculation similar to (5.23), it is also straightforward to determine the behavior of these basis states when they cross a strand. This is given by which indicates that, from the point of view of the half-braiding, the two quasi-particle ends on the right behave like one fused quasi-particle. This justifies the name "fusion basis" and the use of these states to define the fusion of two puncture quasi-particles into one. This is what we are now going to do explicitly. 
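The fusion operation constructed in the next subsection normalizes the different cutting channels using the compatibility of the quantum dimensions with the fusion rules, i.e. the identity $\sum_u N^{\ u}_{s_1 s_2}\, d_u = d_{s_1} d_{s_2}$, which guarantees that the channel weights $d_u/(d_{s_1} d_{s_2})$ sum to one (this is our reading of the normalization argument around (6.8) and (6.9)). Since this is a purely numerical statement, it can be checked directly; the short Python sketch below (illustrative helper names, standard Verlinde fusion coefficients and quantum dimensions) does so for several levels.

```python
import numpy as np

def smatrix(k):
    a = np.arange(1, k + 2)
    return np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * np.outer(a, a) / (k + 2))

def quantum_dims(k):
    """d_j = [2j+1] = sin(pi(2j+1)/(k+2)) / sin(pi/(k+2)), indexed by a = 2j."""
    a = np.arange(k + 1)
    return np.sin(np.pi * (a + 1) / (k + 2)) / np.sin(np.pi / (k + 2))

def fusion_tensor(k):
    S = smatrix(k)
    return np.rint(np.einsum('ar,br,ur->abu', S, S, S / S[0]))

# Compatibility of quantum dimensions with the fusion rules:
# sum_u N_{ab}^u d_u = d_a d_b, so the channel weights d_u / (d_a d_b) sum to one.
for k in range(1, 9):
    d, N = quantum_dims(k), fusion_tensor(k)
    assert np.allclose(np.einsum('abu,u->ab', N, d), np.outer(d, d), atol=1e-9)
print("dimension/fusion compatibility verified for k = 1, ..., 8")
```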
As discussed above, we would like to describe the fusion process as the cutting of a threepunctured sphere away from the manifold, or equivalently as the gluing of a three-punctured sphere followed by a filling of the resulting hole. To make this concrete, let us consider a state with two neighboring punctures carrying excitations. In order to describe the fusion as the removal of a three-punctured sphere by cutting around these two punctures, we should first locally rewrite the state in the S 3 fusion basis O. The choice (6.1) is however not unique, and there exist different fusion basis given by arbitrary orientation choices for the crossings. For our purposes, it will be convenient to use the following rewriting formula to a particular fusion basis: Other possible choices are given in appendix F, and will lead to different Clebsch-Gordan coefficients for the fusion. Since we are only interested in what happens in the neighborhood of the two punctures, we have here deliberately left the strands r 1 and r 2 open in order to signify that there could be an arbitrary complicated graph continuing in this direction. Now, using formula (3.31) from right to left and sliding the vacuum loop over the graph, we can further rewrite the state (6.3) as For the moment, we have simply used the local graphical equivalence relations to obtain a strict equality between the left-and right-hand sides of ( 6.4). With this rewriting, we are now ready to define the fusion operation. First of all, one can verify with the trace inner product that G is a state of unit norm. It is therefore meaningful to try to replace it by another state of unit norm, namely 6.6) which can be seen as being obtained by cutting away a three-punctured sphere in (6.5) at the level of the strand u. Furthermore, this fused state has the same behavior (given by the half-braiding tensor) as G when crossing a strand. However, notice that the label u becomes free and unspecified in (6.6), while it does initially not appear in (6.5) by virtue of (3.31). Therefore, the cutting of the three-punctured sphere at the level of the strand u should be done in all possible channels which are compatible with the strands s 1 and s 2 arriving at the two punctures being fused. We thus wish to sum over the label u and to insert a fusion coefficient δ s 1 s 2 u , i.e. to consider ( 6.7) However, if we want to define the fusion by cutting away and replacing several states (because of the sum over u), an additional natural requirement is that the total norm of the states which are removed should be one. Because of the fusion rule, we have that which shows that we should rather replace ( 6.9) This argument motivates the definition of the following fusion operation: and where ⊗ = denotes the fact that the equality comes from the fusion of the two puncture states. In this algebraic expression we have made explicit the coupling rules which would otherwise only be seen graphically on (6.5). As we are going to see in what follows, this definition of the fusion operation leads to a consistent gluing of the ribbons. Before going on, notice that (6.10) is sufficient to describe all possible fusion processes. Indeed, even if one of the quasi-particle ends being fused has a different orientation, one can use the transformation 6.12) or its inverse to go back to the left-hand side of (6.10). We are now going to give examples of this fusion rule by computing the action of ribbon operators. VII. 
RIBBON OPERATORS This section is devoted to the precise definition and study of the action of the ribbon operators A. Definition of (left) ribbon operators We want to use the fusion operation introduced above in order to define the action of a ribbon operator as the fusion of its quasi-particle ends with the excitations of the punctures it is acting on. First, a choice of convention has to be addressed. The fusion which we are considering here is the fusion in the Drinfeld centre category, and the order of the factors plays a very subtle role [70]. We therefore have to decide whether the left or right factor is given by the ribbon's quasi-particle ends or the excitations of the punctures. First, note that since a ribbon Rξ ξ has an orientation, it is meaningful to refer to its source and target quasi-particle endsξ and ξ. Referring to this orientation, i.e. with the ribbon pointing upwards, we will define the action of a ribbon from the left as follows: In the fusion process of the target quasi-particle end ξ with a puncture excitation χ, we will use ξ ⊗ χ with the ribbon target ξ providing the left factor. For the fusion process of the source quasi-particleξ with a puncture excitation γ, we have to rotate our viewpoint by 180 • , and consider an ordering γ ⊗ξ. In this way, the ribbon with its ends is situated on the left of an auxiliary strand connecting the two punctures on which the ribbon will act. The precise definition of the action of a left ribbon operator is as follows: • If one wants to use a ribbon operator to create an excitation at a location in Σ p where there is initially no puncture, one first has to embed the state under consideration into the Hilbert space H Σ p+1 using (4.14), i.e. to add a puncture in the vacuum configuration at the desired position. • Use the representation (5.30) for the ribbon as a state around two auxiliary punctures. Place the two auxiliary punctures of the ribbon near the intended source puncture in Σ p . If the marked point of the source puncture with excitation γ in Σ p is pointing upwards, the source puncture of the ribbon should be placed to the left side of the marked point such that the marked point of the source puncture also points upwards. The state γ around the source puncture of Σ p and the source punctureξ of the ribbon should be brought into the form depicted on the left-hand side of (6.10), with the upper puncture there representing the excitation γ of Σ p and the lower puncture the (source) ribbon punctureξ. • One moves the target puncture of the ribbon along the intended ribbon path to the intended target puncture in Σ p . For each strand being crossed over, one applies the sliding rule ( 5.27) which involves the half-braiding tensor. We can graphically resolve the half-braiding tensors using (5.32): the ribbon ij crossing strands a, b, c, . . . can be replaced by a double strand labelled by i and j, where the strand i over-crosses the strands a, b, c, . . ., and the strand j under-crosses the same strands. Graphically, this is given by (7.1) • For the fusion of the target puncture ξ of the ribbon and the target puncture χ in Σ p , we move the ribbon target puncture to the left (with respect to an upward orientation of the ribbon) of the puncture χ in Σ p . The marked points of both punctures should now be pointing downwards. Again, we then bring the state around the target puncture in Σ p and the target puncture of the ribbon into the form depicted on the left-hand side of (6.10). 
This time, the upper puncture in (6.10) is representing the target puncture of the ribbon, and the lower puncture is representing the target puncture in Σ p . • Finally, one applies the replacement rule (6.10) to fuse the ribbon's quasi-particle ends with the punctures they are meant to act on. One then obtains new fused punctures whose locations agree with those of the initial punctures in Σ p which have been acted on. Notice that for a given H Σp the action of an open ribbon operator depends on three different data, namely: • The quasi-particle excitations ξ̄ and ξ carried by the ribbon; • The quasi-particle excitations γ and χ carried by the punctures with which ξ̄ and ξ are fusing; • An equivalence class of paths connecting the two quasi-particles γ and χ, and the state in a neighborhood of this path. The state-dependent equivalence class of paths is defined by the sliding property (5.28) for the ribbons, which allows one to deform the ribbon's path as long as it does not cross a non-vacuum puncture. B. Open ribbon operators The first interesting case to consider is the action of a ribbon operator on the cylinder vacuum state. We have already argued above that this should reproduce an O basis state, but this result can now be proven using the fusion rule. Graphically, the action of a ribbon on the cylinder vacuum state is given by where we should use the fusion rule (6.10) to fuse ξ from above with the right vacuum puncture, and then (6.10) rotated by 180° to fuse ξ̄ from below with the left vacuum puncture. It is straightforward to compute the resulting state for the fusion of ξ, for example. Just as exponentiated fluxes can be composed to yield "longer" fluxes along so-called co-paths (i.e. paths in the triangulation itself) [25,26], open ribbons can also be glued with an operation which we now define. C. Gluing of ribbons Open ribbon operators can be glued along punctures, which allows us to represent a "longer" ribbon as the gluing of "shorter" ribbons. Since a ribbon operator leads to excitations only at its ends, we have to project the glued ends of the shorter ribbons back to the vacuum state. In fact, the considerations we make here will show the cylindrical consistency of the ribbon operators in the sense explained in the last item of section II A. The gluing can be represented graphically as and here we are going to describe the operation depicted by the dashed line around the quasi-particle ends ξ and ξ̄. For this, consider the situation in which we first fuse a vacuum puncture with the target quasi-particle ξ = ij of a ribbon operator, and then fuse the resulting excited puncture with the source quasi-particle of a second ribbon operator labelled ξ̄ = īj. Looking at the two quasi-particles ξ and ξ̄ being fused on the left-hand side of (7.5), and following our convention which requires placing the marked points on the same side, we are led to considering the fusion (7.6). The gluing operation is now defined on this resulting fused puncture by the following additional steps: • One first projects the state around the fused puncture onto the Gauss and flatness constraints using the projections B n and B p respectively. This amounts to projecting the puncture onto the vacuum state, and therefore the resulting state looks locally like a long ribbon which passes by this vacuum puncture. • One then contracts the two "ribbon tail" labels s 1 and s 2 by summing over s 1 = s 2 . The projection onto the Gauss constraint of the right-hand side of (7.6) forces u = 0, and therefore a = b and s 1 = s 2 as well.
Graphically, it leads to and therefore one ends up with a vacuum puncture. Now, one should remember that the quasi-particles being glued on the right-hand side of (7.6) come from the ends of the two ribbons in (7.5). Since the ribbon operators are defined in (5.30) with a sum over dimension factors, these have to be taken into account here in order to get the final result. It will be convenient to attach these dimension factors to the source quasi-particles of the ribbons. In the present case, we therefore only write the factor coming from the endξ which is below on the left-hand side of (7.6). Also, one should implement the sum over s 1 = s 2 . Putting this together leads to the coefficient u|i, i, j, j, r 1 , r 2 , s 1 , s 2 ] δ u0 δ ab δ a0 δ t0 δ r 1 r 2 , ( 7.9) which, using leads around the puncture to the final glued state Remembering now that the source quasi-particle attached at the left end of the strand r 1 carries the factor v r 1 /(v i v j ), one therefore sees that the gluing of two open ribbons along a vacuum puncture is equivalent to having a long open ribbon passing by the vacuum puncture. This is what is represented in (7.5). The fact that the gluing of two ribbons along a vacuum puncture gives a longer ribbons confirms our definitions for the fusion process which underlies the action of the ribbons. It also allows us to represent a long ribbon, which is a priori defined on some coarse Hilbert space H p , as a combination of shorter ribbons that are connecting additional vacuum punctures for states in a refined Hilbert space H p+q resulting from an embedding of states in H p . This notion is important in order to define a cylindrical consistent family of (ribbon) operators which agrees on states that are connected by the embedding maps. In this way, the action of a (long) ribbon operator coming from some "coarser" Hilbert space H p is uniquely defined for the set of states in a "finer" Hilbert space H p+q which results from the embedding maps. Note that the ribbon operators can be extended in various ways to the full "finer" Hilbert space H p+q . For example, in the computation above we can choose to represent the refined ribbon as the glued ribbon which passes by the puncture to the left or to the right. This feature of having different possible extensions (without an obvious canonical choice) also appears in the BF representation and is discussed in detail in [25,26]. D. Closed ribbon operators The gluing of open ribbon operators now allows us to define closed ribbons. For this we simply have to glue, following the prescription of the previous subsection, the source and target quasiparticles of the same ribbon. Using the result (7.11), this can be done by closing the strand labelled r 1 (which we can relabel r). If the ribbon (and therefore the strand r) does not enclose a (nonvacuum) puncture, the loop can be evaluated to v 2 r . Therefore, a closed ribbon which does not enclose an excited puncture is given by where the dotted line around the two quasi-particle ends indicates that they should be fused and projected with the operators B p and B n , and where we have then omitted the puncture in a vacuum loop produced by the fusion. This equation is a generalization to ribbons (and therefore to the objects of the Drinfeld centre) of the evaluation (3.5) of a closed strand. Let us now consider the more general case of a closed ribbon operator acting on an excited state O basis state. 
Using the half-braiding (5.32) to resolve the crossing between a ribbon and a strand, we get that (7.14) In the last step of this calculation, we have used an argument identical to that below (3.15) in order to detach from the tail of the O basis state the graphical evaluation which appears as a pre-factor. We have therefore shown that the O basis states (5.18) are eigenstates of closed ribbon operators R ij , with eigenvalue given by A proof of this graphical evaluation is given in (C16). This result does also hold for the fusion basis states (6.1) which are a generalization of the O basis states. Now, one can also consider the braiding of ribbon operators. If a ribbon ab is crossing over a ribbon ij, it means that we first apply the ribbon operator ij and then the ribbon operator ab. By using the gluing of ribbons along vacuum punctures introduced above, we can have different parts of a ribbon acting in different orderings. This allows us to consider two intertwined closed ribbons, and, as shown in (C17), we have that We have thus defined the S-matrix of the Drinfeld centre S (ij)(ab) = D 2 s (ij)(ab) and shown that it factorizes as S (ij)(ab) = S ia S jb (this holds more generally for modular fusion categories). As a remark, note that we can use the S-matrix of the Drinfeld center in order to define the new closed ribbon operators where the second equality comes from the fact that the S-matrix of SU(2) k is real and symmetric. With this definition, we obtain that R ab O kl cd = δ ac δ bd O kl cd (7.18) and R ab R cd = δ ac δ bd R ab . (7.19) These new operators are the so-called projective ribbon operators, whose properties were investigated (although in the group representation and for finite groups) in [102]. As a further remark, note that in the calculation of the eigenvalues of the closed ribbon operators and of the S-matrix we could have also used the half-braiding tensor Ω in order to resolve the crossings between a ribbon and a strand. By comparing the result of this alternative calculation (which is given at the end of appendix C) with that obtained above, we find the following contraction identity for the half-braiding tensors: s ia s jb . (7.20) Finally, let us point out that in a context where one is interested in the physical Hilbert space for (2 + 1) dimensional gravity with a cosmological constant, the closed ribbon operators represent the Dirac observables of the theory. The physical Hilbert space contains only the states satisfying all the flatness and Gauss constraints, and it is only non-trivial for surfaces of higher genus or of non-trivial topology because of punctures (for which the flatness and Gauss constraints then do not need to hold). Dirac observables are operators which leave invariant the subspace of states satisfying the constraints. E. Wilson loop and line operators We have already encountered the Wilson loop operator in section V A, where it has been applied to a pure curvature state. Since these pure curvature states do not have an outgoing strand, we did not have to decide whether to place the Wilson loop by crossing over or under the strand. The previous discussions show that the ribbon Rξ ξ = R (ī0)(i0) implements an over-crossing Wilson line in the representation i. Likewise, with i = 0 one finds that a ribbon R (0j)(0j) agrees with the action of an under-crossing Wilson line in the representation j. 
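Because the O basis states are simultaneous eigenstates of all closed ribbon operators, the algebra of the projective ribbons can be checked with a small numerical model in which each closed ribbon R_ij is represented by its spectrum on the quasi-particle labels (c, d). The Python sketch below is schematic: it assumes eigenvalues of the factorized form $S_{ic}S_{jd}/(S_{0c}S_{0d})$ (the precise conventions of (7.15) and (7.20) may carry additional sign factors coming from the $v_j$, which do not affect the argument), and it verifies that the doubled S-matrix $S_{(ij)(ab)} = S_{ia}S_{jb}$ is again real, symmetric and orthogonal, and that the projective ribbons satisfy (7.18) and (7.19).

```python
import numpy as np
from itertools import product

def smatrix(k):
    a = np.arange(1, k + 2)
    return np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * np.outer(a, a) / (k + 2))

k = 4
S = smatrix(k)
labels = list(product(range(k + 1), repeat=2))     # Drinfeld-centre labels (a, b)

def ribbon(i, j):
    """Closed ribbon R_ij modelled by its spectrum on the O basis states,
    assuming eigenvalues of the factorized form S_ic S_jd / (S_0c S_0d)."""
    return np.diag([S[i, c] * S[j, d] / (S[0, c] * S[0, d]) for (c, d) in labels])

# The S-matrix of the Drinfeld centre factorizes: S_{(ij)(ab)} = S_{ia} S_{jb},
# and is again real, symmetric and orthogonal.
S2 = np.array([[S[i, a] * S[j, b] for (a, b) in labels] for (i, j) in labels])
assert np.allclose(S2, S2.T) and np.allclose(S2 @ S2, np.eye(len(labels)))

def projective_ribbon(a, b):
    """Projective ribbon: S-matrix-weighted combination of the closed ribbons."""
    return S[0, a] * S[0, b] * sum(S[i, a] * S[j, b] * ribbon(i, j) for (i, j) in labels)

for (a, b) in labels:
    P = projective_ribbon(a, b)
    # Eigenvalue 1 on the quasi-particle (a, b) and 0 on all others, cf. (7.18).
    assert np.allclose(P, np.diag([1.0 * ((c, d) == (a, b)) for (c, d) in labels]))

# Orthogonality of the projective ribbons, cf. (7.19).
P00, P01 = projective_ribbon(0, 0), projective_ribbon(0, 1)
assert np.allclose(P00 @ P00, P00) and np.allclose(P00 @ P01, np.zeros_like(P00))
```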
A ribbon operator R (ī0)(i0) acting on two vacuum punctures generates a torsion excitation given by a strand i over-crossing the vacuum loops and all the other strands in-between the two punctures, i.e. Similarly, the ribbon R (0j)(0j) connects the two punctures with a strand j under-crossing the vacuum loops and all the other strands in-between the two punctures. Therefore, the ribbon operators R (ī0)(i0) and R (0j)(0j) generalize the Wilson line operators by adding the information about whether they cross pre-existing strands from above or from below. As an example, we can consider the action of a closed ribbon R ξ = R i0 on a cylinder basis state of the type O a0 aa . According to the general result (7.20), the O a0 aa state is an eigenstate with eigenvalue λ i0,a0 = 1 v 2 a s ai . (7.22) This can also be checked with a direct calculation by applying an over-crossing Wilson loop to O a0 aa and using the sliding property (3.28) for the vacuum line. Therefore, we see that the O a0 aa basis states have non-trivial curvature when measured with respect to over-crossing Wilson loops. On the other hand, a closed ribbon R 0j acting on a cylinder basis state O a0 aa results in an eigenvalue which again can also be checked by a direct calculation. Hence, the states O a0 aa have vanishing curvature when measured with respect to under-crossing Wilson loops (since with the normalization of (4.13) the eigenvalue is one). F. Flux operators In the SU(2) BF representation, the exponentiated flux operators act in the group representation by right or left multiplication, and generate curvature excitations. We have already seen in section V A that the states O jj 00 carry (only) curvature, and that the spin label j measures the amount of curvature (since in the limit of large k the spin j is proportional to the class angle). On the cylinder, such states are generated by ribbons Rξ ξ withξ = (j, j, 0) and ξ = (j, j, 0), which we can denote simply by R j . The states O jj 00 do not violate the Gauss constraints, and thus correspond to (gauge) group-averaged curvature states dhδ(h −1 gh). We can therefore conclude that the ribbon operators R j act as exponentiated flux operators 26 . In the group case, the action of the exponentiated flux operator by multiplication from the right or left does lead to a Gauss constraint violation for the source or target node of the link (on which the flux acts) respectively. The exponentiated flux, and with it the Gauss constraint violation, is however usually parallel transported to a special node, called the root. In the quantum group case, we can also consider ribbons Rξ ξ with ξ = (j, j, 0) andξ = (j, j, s), i.e. ribbons which have a vanishing tail for the target puncture but a non-vanishing tail s = 0 for the source puncture. We therefore also have operators which lead to curvature excitations and a violation of the Gauss constraint at the source puncture. One can also transport this Gauss constraint violation towards another (root) puncture. To this end, one has to apply (a linear combination of) ribbon operators corresponding to open Wilson lines, and finally project the source puncture with the Gauss constraint projector B n . Besides open ribbon operators R j corresponding to exponentiated fluxes leading to curvature excitations, we can also consider closed ribbon operators R cl j . 
In the SU(2) BF representation, these correspond to the following operators: First of all, one can add up flux operators associated to a so-called co-path, which is a connected set of triangulation edges. The fluxes associated to the edges of the co-path are first parallel transported to a common frame, and then added up. The corresponding operator leads to curvature excitations only at the endpoints of the co-path. Now, by choosing a closed co-path, we could conclude that we have obtained the analogue of a closed ribbon operator. However, the closed co-path operator can still lead to a Gauss constraint violation (at the node which defines the common frame) and to a flatness constraint violation at the face where the co-path starts and ends. To obtain a similar operator as in the quantum group case, we do have to apply the projector to the flatness constraints for the face in question and to also apply a gauge group averaging at the node in question. The resulting operators for the group case will be discussed more deeply in [102]. We have seen in (7.14) that the quasi-particle excitations ab are eigenstates of the closed ribbon operators R ξ = R ij with eigenvalues λ ij,ab . Moreover, the eigenvalues for these closed ribbon operators factorize in two values associated to over-and under-crossing Wilson loops respectively (which can be seen by setting respectively i and j equal to zero in (7.14)). These over-and undercrossing Wilson loops encode both the curvature and the torsion (or spin) content of the excitation. Torsion is connected to a Gauss constraint violation, and is generated by open ribbons R (ī0)(i0) and R (0j)(0j) corresponding to open over-and under-crossing Wilson lines. Curvature, on the other hand, is generated by open ribbons Rξ ξ (which, depending on the "tails", can also lead to a Gauss constraint violation). The open ribbons Rξ ξ can be also obtained from applying R (ī0)(i0) and R (0j)(0j) one after the other. We therefore see that flux operators (and more generally all ribbon operators) can be obtained from over-crossing and under-crossing Wilson lines. On the other hand, via fusion we can obtain an excitation including torsion out of two "pure curvature" excitations. This is encoded in the fusion rules for the Drinfeld centre, which are simply given by the double of the fusion rules for the category of representations of SU(2) k , i.e. (7.24) where one should understand the implicit imposition of an overall set of admissibility conditions for every triple (two "ingoing" and one "outgoing" or "fused") of spins. These fusion rules can be read from the fusion basis (6.1). From this expression, one can see that the fusion of two excitations iī and iī includes the components i0 and 0ī. A closed ribbon encircling these two "pure" curvature excitations will therefore detect components corresponding to a torsion excitation. In the SU(2) BF case, this curvature-induced torsion can be explained by the unavoidable parallel transport involved in the definition of the fluxes in the coarser region, and therefore can be measured by fluxes associated to closed co-paths [25]. There is an alternative explanation in terms of quasi-particle excitations, which is that two spinless particles can have an overall non-vanishing spin. Curvature-induced torsion is one of the reasons why it is advantageous to allow also for torsion degrees of freedom at the punctures. Indeed, this leads to a stability of the state space under coarse graining, and the fusion basis makes this stability explicit. 
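The statement that two "pure curvature" excitations can fuse into torsion-carrying channels can be made concrete using the doubled fusion rules (7.24). The Python sketch below (illustrative helper names; fusion coefficients computed from the Verlinde formula) lists the fusion channels of two identical excitations of type (j, j) at level k = 2 and shows that channels of over- and under-crossing Wilson-line type, i.e. of the form (x, 0) and (0, y) with x, y ≠ 0, do appear.

```python
import numpy as np
from itertools import product

def smatrix(k):
    a = np.arange(1, k + 2)
    return np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * np.outer(a, a) / (k + 2))

def fusion_tensor(k):
    S = smatrix(k)
    return np.rint(np.einsum('ar,br,cr->abc', S, S, S / S[0])).astype(int)

k = 2
N = fusion_tensor(k)
labels = range(k + 1)                      # a = 2j = 0, 1, 2 at level k = 2

def doubled_fusion(x, y):
    """Fusion channels of two Drinfeld-centre labels x = (i1, j1), y = (i2, j2):
    the double of the SU(2)_k fusion rules, cf. (7.24)."""
    (i1, j1), (i2, j2) = x, y
    return {(c1, c2): N[i1, i2, c1] * N[j1, j2, c2]
            for c1, c2 in product(labels, repeat=2)
            if N[i1, i2, c1] * N[j1, j2, c2] > 0}

pure = (1, 1)                              # a "pure curvature" excitation (j, j) with j = 1/2
print(doubled_fusion(pure, pure))
# -> {(0, 0): 1, (0, 2): 1, (2, 0): 1, (2, 2): 1}
# The channels (2, 0) and (0, 2) are of over-/under-crossing Wilson-line type,
# i.e. torsion-carrying: a closed ribbon around the pair detects them.
```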
Such a stability is rather difficult to implement with (gauge-invariant) spin network states (see the discussion in [102,103]). VIII. CONCLUSION AND PERSPECTIVES In the work [24][25][26], we have shown that the holonomy-flux algebra of LQG admits a representation based on a kinematical vacuum state which is peaked on flat connections. This leads in turn to a quantum theory which is inequivalent to that based on the AL vacuum, and the properties of this new representation have been recalled in section II A. As argued in the introduction, since the flat curvature vacuum and its excitations can be seen respectively as a physical state of BF topological field theory and curvature defects therein, we are naturally led to wonder whether there exists a deeper relationship between quantum representations of LQG and extended TQFTs. In the present work, we have taken a first step towards this understanding by explaining how the extended TV TQFT, when seen as a curvature vacuum together with its excitations, does lead to a representation of the holonomy and flux operators of LQG. More precisely, we have focused in this work on (2+1)-dimensional Euclidean gravity with a positive cosmological constant, which is a topological field theory. When focussing on the description of the kinematics of the theory, i.e. when looking for a representation of the kinematical observable algebra encoded into the holonomy-flux algebra, we also have to consider excitations on top of the topological field theory. These are incorporated in the framework of so-called extended TQFTs. By choosing the vacuum state underlying our representation to be given by the TV TQFT, we are forced to work with the category of representations of SU(2) k and the tools of graphical calculus which we have summarized in section III. We have shown in section IV that it is possible to define on punctured manifolds vacuum states which are invariant under the action of projection operators implementing the vanishing of curvature (in excess of the homogeneous curvature forced by the cosmological constant) and torsion. These projection operators implement the same constraints as the path integral defined by the TV model. This defines the kinematical vacuum state peaked on gauge-invariant "flat" configurations. Then we have explained in section V how to characterize the (quasi-particle) excitations on top of the vacuum state. These are described by modules (or representations) of the tube algebra, and are labelled by elements of the Drinfeld center. We have shown in section VI how to define the fusion of the quasi-particle excitations, and explained how this can be used to define the action of ribbon operators. Finally, we have studied in detail the properties and the action of open and closed ribbon operators in section VII. The open ribbons act as creation operators which are the analogue of (but generalize) the exponentiated flux operators of the SU(2) BF representation, and which act on the vacuum state by creating, depending on the spin labels of the ribbon's quasi-particle ends, torsion or curvature excitations. This result provides a new concrete realization (the first one being the construction of the SU(2) BF vacuum) of the conjectured general result that the kinematical vacua and their excitations can be understood in terms of TQFTs with defects [29]. Moreover, the framework developed here presents several advantages compared to the previous (SU(2) BF and AL) representations of the holonomy-flux algebra.
We list a few of these important features below. • First of all, this work establishes the contact between tools of extended TQFTs and the study of representations of quantum gravity and realizations of quantum geometry. We believe that this mathematical framework will facilitate the understanding of the structure of the vacua in quantum gravity, as well as the investigation of the continuum limit and the phase structure for spin foams and group field theories. • Because the excitations on top of the TV vacuum correspond to local curvature and torsion degrees of freedom, this framework enables a natural description of the coupling of massive spinning point particles to (2 + 1)-dimensional gravity with a positive cosmological constant. In addition, we have seen that the structure of the Drinfeld center labelling these degrees of freedom does appear naturally from simple stability considerations for the quasi-particles. This opens the possibility of understanding in a deeper manner the inclusion of particles in both the canonical and covariant quantizations of three-dimensional gravity with a cosmological constant. • A major advantage of the present construction over the SU(2) BF representation is that it comes with a built-in finiteness (i.e. one does not need to resort to a Bohr compactification like in the group case). The Hilbert spaces H p describing excitations located at a fixed number p of punctures are finite-dimensional because of the finiteness of the category of representations of SU(2) k . Thus, the spectra of geometrical operators (which can be restricted to a given H p ) are discrete, as opposed to the continuous spectra which appear in the SU(2) BF case [26] for operators built from the fluxes, or in the AL case for operators built from the holonomies. • The fusion basis describes states which possess both curvature and torsion excitations at the punctures and which can be naturally fused together. The state space spanned by these states is therefore stable from the point of view of the creation of curvature-induced torsion, since it encompasses all possible degrees of freedom from the outset. This is a major advantage for the study of coarse graining of such states, as opposed to the study of usual gauge-invariant spin network states. In the case of a gauge theory with classical groups, one can also construct the fusion basis using the holonomy representation [102]. This construction also provides an interpretation of the charge labels appearing in the quantum deformed case as encoding mass and spin. Although the present construction was carried out in the restricted context of Euclidean three-dimensional gravity with a positive cosmological constant, the results which we have obtained and the tools which have been used can potentially be generalized to various other cases of interest. We list below some developments which we postpone to further work. • The construction and the study of the vacua and excitations should be extended and made systematic for arbitrary spacetime signature, dimension (three or four), and sign of the cosmological constant. There are two main challenges in this task. First of all, the present formalism has to be adapted to the case of a category which is not finite. This would for example be the case for the Euclidean theory with a negative cosmological constant, i.e. for U q su(2) with q ∈ R. Note that the SU(2) BF representation [25,26] is in this sense not finite and can give some clues about how to deal with this.
When going to the four-dimensional case, we expect that the SU(2) BF representation constructed in [25,26] will provide important hints about how to formulate possible quantum deformations. In the four-dimensional Euclidean theory with a positive cosmological constant, one can expect that the Crane-Yetter TQFT [44,45] will provide the structure of the vacuum, while the excitations will be described by an extended Crane-Yetter TQFT, whose precise definition still needs to be completed. One promising possibility to obtain a (3 + 1)-dimensional framework from a (2 + 1)-dimensional one is to use a Heegaard splitting to encode a three-dimensional triangulation through a two-dimensional so-called Heegaard surface, or more precisely a Heegaard diagram [104]. In [104], this strategy has been applied to the BF representation with classical gauge group. This work shows that the ribbon operators of the (2 + 1)-dimensional theory do map to surface operators which have been constructed in [25,26]. Therefore, in order to obtain a quantum deformed (3 + 1)-dimensional representation, one should apply the same strategy to the TV representation constructed here. • The representation we built for the (2 + 1)-dimensional quantum geometrical operators can also be useful for the four-dimensional theory. One example is the quantization of the horizon geometry of black holes incorporating the "isolated horizon condition" (see [105,106]). Another example is given by the quantization of holographic screens [47]. • It would be interesting to compare in more detail the relationship between the present approach to three-dimensional quantum gravity and other schemes such as the Chern-Simons quantization and the combinatorial quantization à la Fock-Rosly [107]. At the level of the path integral, the relation between the TV model and Chern-Simons theory has been rigorously established in [108,109] (for the relation between three-dimensional quantum gravity without cosmological constant described by the Ponzano-Regge model and Chern-Simons theory, see [86]). In the context of canonical quantization, the work [85] relates the observable algebra in the Fock-Rosly quantization to the holonomy-flux algebra of loop quantum gravity. This has been worked out in the context of the AL representation, but the relationship is even tighter in the BF representation. It remains to generalize these arguments to the case of a non-vanishing cosmological constant. • By comparing the framework presented here to the usual spin network basis, one might wonder about the fate of the magnetic indices (attached to a choice of basis in a given representation space V j ) which one would expect to have at the open ends of the strands. In fact, here we were following the setup of so-called (non-extended) string-nets [58,70], in which such magnetic indices do not appear. For the case of proper groups, so-called extended string-nets which include magnetic indices have been defined [110,111]. This setup is needed in order to match the extended string-nets with a description based on group variables (also known as Kitaev models [88]), as will be further explored in [102]. The additional magnetic indices provide local information, i.e. they can be manipulated by local operators. This is different from the quasi-particle quantum numbers ij, which can be changed by ribbon operators that are rather quasi-local.
In the case of quantum groups, the definition of extended string-nets has not been completed yet, but it has been argued to exist in [111] based on a so-called weak Frobenius fiber functor [112]. This would then also allow for the definition of a generalized Fourier transform to a (generalized) group picture. The reason why the quantum group case (at root of unity) is much more involved is that the tensor product of two representations can lead to an indecomposable part which needs to be factored out. This is mirrored in quantum dimensions which do not need to be natural numbers. • The ribbon operators constructed in this work replace the holonomies and fluxes which encode the quantum geometry. It remains to construct out of these ribbons the explicit geometrical operators, e.g. an operator giving the length of a geodesic, or an operator giving the dihedral angle (i.e. the extrinsic curvature) attached to an edge in the triangulation, and to study their spectra (for the BF representation, area and length operators have been constructed in [26]). • Several works attempt to impose the dynamics of three-dimensional quantum gravity with a cosmological constant starting from a kinematical set-up, which so far is based on the AL representation and does not incorporate a quantum deformation (see [65] for a completion of this program at the classical level). It is hoped that this "exercise" will give important insights about how to impose the dynamics in the four-dimensional case. It might however be easier (due to the finiteness of the underlying fusion category) to start with the framework presented here, adapted to a cosmological constant Λ, and to impose the dynamics defined by a larger cosmological constant Λ′ > Λ. This would give interesting lessons on how (curvature) excitations condense to lead to a new vacuum state. The deformation parameter q = e^{2πi/(k+2)} is a root of unity. With this deformation parameter, we can introduce the quantum numbers
$$[n] = \frac{q^{n/2} - q^{-n/2}}{q^{1/2} - q^{-1/2}} = \frac{\sin\left(\frac{\pi n}{k+2}\right)}{\sin\left(\frac{\pi}{k+2}\right)},$$
together with the quantum factorials [n]! = [n][n − 1] · · · [1] and [0]! = 1. This then defines the quantum dimensions d j = [2j + 1], and in particular, using the notation v 2 j = (−1) 2j d j , the so-called total quantum dimension $D = \big(\textstyle\sum_j d_j^2\big)^{1/2}$. Here Z = D −1 is the evaluation of the path integral of SU(2) k Chern-Simons theory on the three-sphere. The coefficient ∆(a, b, c), which enters the 6j symbol below, is defined to be zero when the triple (a, b, c) is non-admissible. The Racah-Wigner quantum 6j symbol is then given by the formula
$$\begin{Bmatrix} i & j & m \\ k & l & n \end{Bmatrix} = \Delta(i,j,m)\,\Delta(i,l,n)\,\Delta(k,j,n)\,\Delta(k,l,m)\sum_z \frac{(-1)^z\,[z+1]!}{[z-i-j-m]!\,[z-i-l-n]!\,[z-k-j-n]!\,[z-k-l-m]!\,[i+j+k+l-z]!\,[i+k+m+n-z]!\,[j+l+m+n-z]!}\,,$$
where the sum runs over max(i + j + m, i + l + n, k + j + n, k + l + m) ≤ z ≤ min(i + j + k + l, i + k + m + n, j + l + m + n). Finally, the F-symbols are defined in terms of this quantum 6j symbol. From the explicit expression (A9) for the R-matrix of SU(2) k one can see that R ij k = R ji k and R i0 i = 1. Now, the S-matrix has entries defined via the R-matrix, and using expression (3.24) we get the explicit expression (A11). With this notation, we see that s ij corresponds to the evaluation of the Hopf link in the three-sphere, and we understand the normalization coefficient S 00 = Z = D −1 . The S-matrix satisfies a number of important properties, and can be used to compute the fusion coefficients via the famous Verlinde formula
$$N_{ij}^{\ \ m} = \sum_p \frac{S_{ip}\, S_{jp}\, (S^\dagger)_{pm}}{S_{0p}}$$
(the S-matrix is real and symmetric in the case of SU(2) k , but we indicate the Hermitian conjugate for the sake of generality). When written in the form $(N_i)_{jm} = \sum_p S_{jp}\,\frac{S_{ip}}{S_{0p}}\,(S^\dagger)_{pm}$, the Verlinde formula shows that the S-matrices diagonalize the fusion matrices. Appendix B: Basis states for a two-surface of genus g ≥ 1 In this appendix, we briefly discuss the choice of basis for the graph Hilbert space on a punctured surface of genus g ≥ 1. Let us start by considering the case of a torus with no punctures.
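Before turning to the torus, the quantum numbers and dimensions introduced above are easy to check numerically. The short Python snippet below (illustrative helper names; it uses the form of [n] given above) computes the quantum dimensions d_j = [2j + 1] and the total quantum dimension, and verifies the normalization S_00 = Z = D^{-1} as well as the relation d_j = S_{0j}/S_{00}.

```python
import numpy as np

k = 4
labels = np.arange(k + 1)                  # a = 2j for spins j = 0, 1/2, ..., k/2

def qnum(n, k):
    """Quantum integer [n] = sin(pi n / (k+2)) / sin(pi / (k+2))."""
    return np.sin(np.pi * n / (k + 2)) / np.sin(np.pi / (k + 2))

# Quantum dimensions d_j = [2j+1] and total quantum dimension D = sqrt(sum_j d_j^2).
d = qnum(labels + 1, k)
D = np.sqrt(np.sum(d ** 2))

# Normalization of the S-matrix: S_00 = Z = 1/D ...
S0 = np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * (labels + 1) / (k + 2))
assert np.isclose(S0[0], 1.0 / D)

# ... and quantum dimensions as ratios of S-matrix elements: d_j = S_0j / S_00.
assert np.allclose(d, S0 / S0[0])

print(f"k = {k}: d = {np.round(d, 4)}, D = {D:.4f}")
```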
A minimal graph for the torus has two three-valent nodes and three strands, and covers the two independent non-contractible cycles. A possible choice of minimal graph is represented on figure 6. By analogy with the construction of the basis for the punctured sphere, one could a priori assume that a basis for the torus is given by all the admissible spin colorings of this minimal graph. However, this set of states is over-complete, since the graph features a face which covers the entire surface of the torus (this face corresponds to the dual vertex which results from seeing the torus as the gluing of a parallelogram). Indeed, for a fixed level k, the dimension of the graph Hilbert space on the torus without punctures (known as the ground state degeneracy) can be shown to be dim H T = (k + 1) 2 [95]. For k = 1, one finds that there are indeed (1 + 1) 2 = 4 admissible colorings of the minimal graph of figure 6. However, for k = 2, this formula gives dim H T = 9, while one can check that there are 10 admissible colorings of the minimal graph of figure 6. This is due to the fact that not all these colorings are independent: the colorings of the minimal graph form an over-complete basis. One therefore has to impose on the states a flatness constraint for this face. This results in the fact that some linear combinations of states are equivalent to other linear combinations. There exists a procedure which enables us to get an independent set of states. This is borrowed from Dirac's procedure to quantize constrained systems, and it consists in considering a one-punctured torus without violations of the Gauss constraint. Then, the set of states based on colorings of the graph of figure 7 does indeed form an independent basis. Therefore, we can also define an inner product by declaring these basis states to be orthonormal. The resulting Hilbert space would be equivalent to a so-called kinematical Hilbert space in Dirac's constrained quantization procedure. Note that this kinematical Hilbert space is finite-dimensional. One now has to impose the flatness constraint for the single face of the graph of figure 7. The solution space (i.e. the null space of the constraints) defines the physical Hilbert space which we are looking for. The inner product on this physical Hilbert space is just the one induced from the kinematical Hilbert space, which is well-defined since we are in a finite-dimensional context. This procedure generalizes to all surfaces of higher genus. Any such surface can be triangulated with at most one vertex, which results in one face for the dual graph. The basis for the kinematical Hilbert space (in the above-mentioned sense of Dirac quantization) is based on the graph dual to this triangulation with one vertex, and one has to impose once again a flatness constraint in order to get to the physical Hilbert space. Higher genus surfaces with punctures are comparably less complicated. One starts with the basis based on the graph dual to the minimal triangulation with one vertex. For states on a one-punctured surface, which also include Gauss constraint violation, we add one strand going from the puncture to any strand of the graph. For any additional puncture, we then add the same Q-shaped piece of graph as in the case of the punctured sphere. States based on these graphs (i.e.
states obtained by allowing all admissible colorings) define a basis of the graph Hilbert space on the punctured surface. This defines a basis which is analogous to the Q basis for the punctured sphere. One can then also define basis states which have similar properties as the Ocneanu (or fusion) basis. For this discussion, we refer the reader to [59]. Proof of the sliding property (D6). On the one hand, we have that Therefore, in order for the sliding property (D6) to hold, a necessary and sufficient condition is that Using first the hexagon identity (3.11) and then the orthogonality relation (3.9c), one can show that this relation does indeed hold, thereby ensuring that the sliding property is true. Proof of (4.23) in the case s = 6. Let us denote the flatness projection operator acting on a face by Then we have that F n 2 l 1 n 1 j 1 kj 2 F n 3 l 2 n 2 j 2 kj 3 F n 4 l 3 n 3 j 3 kj 4 F n 5 l 4 n 4 j 4 kj 5 F n 6 l 5 n 5 j 5 kj 6 F n 1 l 6 n 6 (C15) Proof of (7.15). Proof of (7.20). By using the half-braiding (5.32) to resolve the crossing between a ribbon and a strand, we get that (C18) Now, the ribbon quasi-particles ξ andξ can be fused on the right of the puncture, leading to (C19) The graph on the right-hand side of this equality is noting but the stacking Q ep tt × O ab tu of two cylinder basis states. By using the inverse of the basis transformation (5.19), which is given by which is here derived using respectively the pentagon relation on e, the relation ( 3.25), the pentagon relation on m, the replacement (3.25) again, and finally the pentagon relation on n. We have therefore shown that the eigenvalues λ ij,ab of the closed ribbon operators acting on O basis states can be written as as announced in (7.20). Appendix D: Commutativity of B n and B p In this appendix, we show by an explicit calculation that B n and B p commute when acting on the cylinder basis states Q. First, one has that n B n Q ij rs = δ ijr δ ijs δ r00 δ s00 r s j i = δ ij δ r0 δ s0 j , which implies that where in the last step we have used the fact that the graph is defined on the punctured sphere. Note that, since we are on the sphere, in the p-punctured case it is sufficient to act with B p on of freedom is that of the punctures, which are obtained by removing an infinitesimally small disk from the surface Σ and by marking a point on the resulting S 1 boundary (equivalently, one can also think of the punctures as points inΣ with a tangent vector attached [70] Note also that the punctures are embedded, which includes also an embedding of the marked points. This has the important consequence two Hilbert spaces H Σp and H Σ p which agree in the number and positioning of the punctures, but disagree in the marked points, do define different Hilbert spaces. Moreover, these Hilbert spaces are not refinements of each other and, with our definitions so far, there also does not exist a common refinement. Further below we will sketch a framework introducing extended Hilbert spaces H ext Σp , for which an arbitrary number of open strands can end at a given puncture. A diffeomorphism will in general also change the embedding of the punctures, and here we would like to briefly discuss the resulting action on a given Hilbert space H Σp . To this end, recall that diffeomorphisms can be thought of either as so-called active or passive transformations. These represent respectively a dragging of the geometrical points of the manifold, or a change of coordinates. 
In order to make the discussion clearer, let us focus on active diffeomorphisms, and fix once and for all a coordinate atlas for the punctured manifold (which we choose for simplicity to be a two-sphere). Two cases of the action of these active diffeomorphisms can already be discussed without ambiguity: the case of diffeomorphisms acting as the identity on the boundary of the punctures, and the case of diffeomorphisms rotating a puncture by ±2π. In the first case, when a (small) diffeomorphism acts as the identity on the boundary of the punctures, one is effectively considering a smooth deformation of the strands as in (3.4). Therefore, these diffeomorphisms do not change the states since we have defined these latter to be equivalence classes under such smooth deformations. Next, consider a rotation of ±2π of a puncture (also known as a Dehn twist). In this case, we still remain in the same Hilbert space since the location of the marked points before and after the diffeomorphism does agree. However, the rotations have a non-trivial action on the states. For example, when rotating the right puncture of a Q basis state by ±2π one obtains respectively where the product of Q states on the right-hand sides can be computed using (5.14). In particular, because of the result of this product, one can see that the Q basis states are not eigenstates of the action of Dehn twists. As pointed out in [59], the states which diagonalize the action of rotations where we have first used (C20) to express the Q states in terms of the O states, and then the stacking property (5.21). In these two calculations, the last equality has been obtained by using the explicit expression (A9) for the elements of the R-matrix. By doing so, we see that the eigenvalues are actually independent of the label s, and can be written in terms of the so-called topological phases θ i := R ii 0 * , in agreement with the results of [59]. Therefore, one can see that the action of a Dehn twist on an O state produces a phase factor, and that the two above results are consistent with each other since a rotation by ±2π followed by a rotation by ∓2π amounts to the identity. Finally, we should discuss the more subtle case of diffeomorphisms rotating a puncture (and therefore its marked point) by an angle ϕ ∈ (0, 2π). One way to treat this case is to consider an extension H Σp,s of the original Hilbert spaces H Σp , which consists in allowing for an arbitrary number s of marked points (and therefore incoming strands) at a puncture. By using refinement maps that add a marked point with a strand of spin label j = 0, we can then consider an inductive limit on the number (and position) of marked points, resulting in an inductive limit Hilbert space H ext Σp . The states obtained by rotating a given marked point by ϕ then belong to the same (much larger) inductive limit Hilbert space. Importantly, note that this extension of the Hilbert space does not change the classification of the excitations. At the mathematical level, this is due to a so-called Morita equivalence class of operator algebras. This means that the representations of these operator algebras lead to the same physical interpretation, namely that of a puncture as a quasi-particle carrying mass and spin. These different operator algebras are simply the ones obtained by considering the gluing of cylinder states with higher number of marked points. 
As explained in [60,113,114], the tube algebra [70][71][72] which we have considered in Sec. V B is the "smallest" representative in this Morita equivalence class. In order to see that it is meaningful to consider a gluing algebra with a higher number of marked points, let us consider for example punctures with two marked points and two incoming strands. In this case, we can show, along the lines of (5.21), that the generalized O states also provide modules, i.e. satisfy a projector property under stacking. For this, let us consider the states (G5) These states can be glued by identifying the marked points of the gluing puncture two by two, and
Forbush precursory increase and shock-associated particles on 20 October 1989

Strong interplanetary disturbances may affect cosmic ray protons with energies below 1 GeV tremendously, increasing their intensity by hundreds of percent, but they are not as effective for protons of higher energies. This energy limit is crucial for understanding processes of cosmic ray propagation and acceleration in the heliosphere. The Forbush pre-increase and the effect of shock-associated particles observed on 20 October 1989 illustrate the problem. This is a rare event in which the energies of shock-associated particles measured by the GOES-7 satellite extend continuously up to neutron monitor energies. The Forbush pre-increase could be attributed to a single reflection of galactic cosmic rays from the magnetic wall observed at 12:00 UT. It had a very hard spectrum, with a maximum energy of modulation of more than 10 GeV. The spectrum of shock-associated particles was soft and their maximum energy was less than 1 GeV. The problem of shock acceleration versus trapping is discussed for the 20 October 1989 event. It is argued that the shock-associated particles were accelerated near the flare site and then propagated to the Earth inside the trap between the two magnetic walls observed at 12:00 UT and 17:00 UT.

Introduction

The proton energy of about 1 GeV corresponds to the upper energy limit of satellite detectors and to the lower energies of cosmic rays (CR) accessible to the neutron monitor (NM) network. Knowing the details of modulation processes in this energy range, where the scale of the modulation may differ tremendously, is very important for understanding the mechanisms of particle propagation and acceleration in the heliosphere. Variations of several percent are characteristic of CR observed by NMs, but satellite detectors might register variations of several orders of magnitude simultaneously. Therefore, a direct comparison of data sets obtained by NMs and by particle detectors aboard satellites may clarify some physical limits of the modulation processes in the heliosphere.

The passage of an interplanetary shock is associated with two distinct effects in CR: one is the Forbush decrease observed by ground-based detectors, and the other is the enhancement of lower-energy CR in interplanetary space. The latter effect is often attributed to particle acceleration by interplanetary shocks, and researchers refer to such increases as shock-accelerated particles. However, the ability of interplanetary shocks to accelerate protons to energies of more than several MeV has not been proved yet (Lim et al., 1995; Kallenrode, 1997). In addition, CR particles can be simply trapped near the shock. I think that it would be more accurate to call them shock-associated particles.

Many Forbush decreases have a precursory increase, which is observed several hours before the shock arrives, but only in rare cases do the energies of the corresponding shock-associated particles form a continuum from spacecraft to NM energies. A reflection of galactic cosmic rays from the magnetic wall, which may be separated from the shock, is generally considered as a possible model of the Forbush precursory increase. Cane (2000) proposed that both effects might be caused by shock acceleration in such rare cases.
The solar activity in the NOAA active region 5747 on 18-19 October 1989, including the parent solar flare (25 S 09 E) of the ground level enhancement (GLE) of 19 October 1989 (the 43rd such event ever recorded, see the GLE database on http//helios.izmiran.rssi.ru/cosray/main.htm), caused large disturbances in the interplanetary space and the subsequent geomagnetic storm on 20 October 1989 (SSC 09:16 UT).The cutoff rigidity of cosmic rays changed dramatically at that time (Struminsky, 1992;Struminsky and Lal, 2001).Klein et al. (1999) reviewed neutral and charge particle emissions of the 19 October solar flare and provided evidences of prolonged energy release and particle acceleration during this event. The increase of shock-associated particles was detected by all energy channels of the GOES-7 proton detector on the background of the GLE 43 decreasing phase, with the maximum intensities even higher than those on 19 October 1989 during its main phase (a factor of 20 and 2 for the 8.7-14.5 and 110-500 MeV proton channels, respectively).The shock-associated particles were observed well after SSC (the first shock) and before the second discontinuity, which is visible in solar wind data at ∼17:00 UT.The Forbush precursory increase of 1-2% is clearly seen in data of lowlatitude NMs before SSC.Apparently, cosmic ray variations of solar and geomagnetic origin masked the precursory increase and the effect of shock-associated particles in data of high-and middle-latitude NMs.Corresponding plots of cosmic ray variations registered by NMs are easily available on http//helios.izmiran.rssi.ru/cosray/main.htm.The purpose of this work is to investigate the effects of the shock-associated particles and the precursory increase on 20 October 1989 in data from the Apatity and Moscow NMs using independent data sets obtained by space and ground based detectors.In order to estimate the precursory increase for mid-latitude NMs the solar and geomagnetic CR variations should be removed from the total CR variations.The NM response to the shock-associated particles can be estimated using the same technique as for solar cosmic ray variations, accounting for changes in the geomagnetic cutoff. The SMM coronagraph was not operating before 16:30 UT on 19 October, thereby leaving a large ambiguity in the identification of possible sources of the two interplanetary disturbances.In earlier papers (Bazilevskaya et al., 1994;Bavassano et al., 1994), the first shock was associated with the energetic 19 October solar event, so the second discontinuity should be attributed to the prolonged energy release during the decay phase of the flare.Note, that Cliver et al. (1990) first mentioned the second discontinuity and favored its association with the 19 October solar event.Cane and Richardson (1995) incorporated data of the 9-22 MeV proton intensity from the IMP-8 satellite, considering its enhancement at ∼17:00 UT as evidence of shock acceleration and, therefore, the existence of the second shock.They suggested that the second shock was more energetic than the first and so it is more likely to have been associated with the energetic 19 October solar event.In this case, in their opinion, another solar event on 18-19 October should be responsible for the first shock, but they did not find an obvious, unique association between the shock and a particular solar event.Moreover, the nature of the shock-associated enhancement on 20 October is not so obvious.Bazilevskaya et al. 
(1994) argued that the shock-associated protons were apparently trapped in the region behind the first shock, with a very small propagation mean free path λ < 0.02 AU. Another scenario for the shock-associated particles is proposed in this article. The solar wind data clearly show two magnetic walls; solar protons accelerated near the flare site might later have been trapped between these walls with a propagation mean free path λ < 0.1 AU.

Data and methods

In this work, the precursory increase and the effect of shock-associated particles observed on 20 October 1989 are studied using data obtained by different space-borne and ground-based detectors. The method of data processing is that used by Struminsky (2001) to separate CR variations of different origin. All necessary data were downloaded from the SPIDR database (http//spidr.ngdc.noaa.gov) and the home page of the Moscow NM.

Table 1 shows some characteristics of the NM stations used in this study. The longitudes of these stations are nearly equal and they look in the same direction in the equatorial plane, so the effects of the CR anisotropy are negligible. The count rate was averaged over the eleven hours before the GLE onset on 19 October 1989, and this value N_0 was taken as the reference level. Values of N_0 normalized to 18NM64 differ by about 1.15%. Apparently, this is the maximum value of possible geomagnetic variations in Moscow.

In the general case, the NM count rate at a particular moment in time is proportional to
$$\int_{E_c}^{\infty} g(E, x)\, J(E)\, dE, \qquad (1)$$
where g(E, x) is the NM sensitivity to primary cosmic rays, J(E) is the differential energy spectrum of CR, and E_c is the current effective cutoff energy. In this study, the NM sensitivity below 2000 MeV is assumed to be equal to $g(E, x) = 6.25 \times 10^{-9}\, E^{3.17}$, and above this energy it is $g(E, x) = 0.0219\, E^{1.15}$. Values of the NM sensitivity above 3 GV are well known (see Clem and Dorman, 2000, and references therein). Belov and Struminsky (1997) estimated the NM sensitivity for lower rigidities from GLE data.

The effective cutoff energy, E_c, in Moscow was estimated for each hour on 20 October 1989 by the method of Fluckiger et al. (1986), using the modified Dst index. The modified Dst index accounts only for the effects of the magnetospheric ring current and, therefore, should describe changes in the cutoff rigidities more properly (Struminsky and Lal, 2001). The reference NM count rate should be proportional to
$$\int_{E_c^0}^{\infty} g(E, x)\, J_0(E)\, dE, \qquad (2)$$
where $E_c^0$ is the effective cutoff energy during geomagnetically quiet periods and $J_0(E)$ is the differential energy spectrum of primary cosmic rays.

The solar cosmic ray variations can be expressed by the integral
$$\int_{E_c}^{E_{max}} g(E, x)\, \delta J(E)_{sol}\, dE, \qquad (3)$$
where $\delta J(E)_{sol}$ is the spectrum of solar protons and $E_{max}$ is their maximum energy. Hourly average uncorrected data from two integral channels of the GOES-7 proton detector were used to evaluate the spectrum of solar and shock-associated protons. These channels measured the integral proton flux within the 84-200 MeV and 110-500 MeV energy bands.
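The pieces of this procedure can be assembled numerically. The short Python sketch below only illustrates how the quoted piecewise power-law sensitivity g(E, x) and the response integrals (1)-(3) fit together; the power-law form of the solar proton spectrum anticipates the next paragraph, and the spectral index, normalisation, cutoff energies and trial E_max used in the example call are hypothetical values of the sketch, not numbers taken from the event.

```python
import numpy as np
from scipy.integrate import quad

def nm_sensitivity(E):
    """NM sensitivity g(E) as quoted in the text, with E in MeV."""
    return 6.25e-9 * E**3.17 if E < 2000.0 else 0.0219 * E**1.15

def galactic_spectrum(E, J0=1.0, gamma_gcr=2.7):
    """Schematic galactic CR spectrum J0 * E^-gamma; normalisation is arbitrary here."""
    return J0 * E**(-gamma_gcr)

def solar_spectrum(E, A, gamma_sol):
    """Assumed power-law spectrum of solar protons, A * E^-gamma, within 84-500 MeV."""
    return A * E**(-gamma_sol)

def relative_solar_variation(A, gamma_sol, E_c, E_max, E_c0=1500.0):
    """One natural way to express the variation: ratio of integral (3) to integral (2)."""
    solar, _ = quad(lambda E: nm_sensitivity(E) * solar_spectrum(E, A, gamma_sol),
                    E_c, E_max, points=(2000.0,))
    reference, _ = quad(lambda E: nm_sensitivity(E) * galactic_spectrum(E),
                        E_c0, 2.0e5, points=(2000.0,))
    return solar / reference

# Example call with purely illustrative numbers (spectral index, amplitude, cutoffs):
print(relative_solar_variation(A=5.0, gamma_sol=3.5, E_c=1400.0, E_max=4000.0))
```

In an analysis along the lines described in the text, E_max would then be varied until the computed variation matches the observed one to the stated 0.1% tolerance.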
Assuming the spectrum of solar protons in a form of power law function within the interval of 84-500 MeV, one can estimate its power law index and normalizing constant from the observed ratio of channel count rates (Belov et al., 1995).If the observed enhancement is really caused by this population of particles, then by varying E max for a given time moment it is possible to obtain the desired coincidence of 0.1% between the observed and expected values of solar CR variations.If the derived spectrum δJ (E) sol beginning from some energy E is less than 0.1 • J 0 (E), then it is assumed that the spectrum is too soft and cannot result in the observed variations, so E max = E.The NM response to the shock-associated particles can be estimated using the same technique.An estimate for the geomagnetic variations is Since E c would likely be within a range of allowed and forbidden trajectories called the cosmic ray peneumbra, the integral (4) provides the upper limit of possible geomagnetic variations.The effect of penumbra should be considered carefully to obtain a better accuracy of geomagnetic variations. By removing the solar and geomagnetic CR variations from the total CR variations, a value of the precursory increase for mid-and high-latitude NMs can be estimated. Precursory increase Figure 1 from top to bottom illustrates the step-by-step calculations of the Forbush precursory increase in Moscow.The power law index γ (second panel) was estimated from the ratio of integral proton fluxes within 84-200 MeV and 110-500 MeV energy bands (first panel) that were measured aboard GOES-7.The integral (3) was calculated within limits of E c and E max (third panel). By varying the maximum energy of solar protons E max , a reasonable coincidence between the observed (total) CR variations and the calculated solar CR variations for the Moscow NM is achieved during the isotropic phase of the 19 October 1989 GLE (fourth panels).However, there is a visible discrepancy of about 2% from 05:00 UT until about 12:00 UT, i.e. the spectrum deduced from the GOES data appeared to be too soft to explain the observed variations.This discrepancy coincides in time with the precursory increase observed by low-latitude NMs and should be caused by the same variations of CR with very hard spectrum δJ (E) int .The precursory increase lasted from ∼05:00 UT until ∼12:00 UT, so galactic cosmic rays penetrated easily the first shock and were scattered effectively by the magnetic barrier behind the shock.This implies that the propagation mean free path of CR protons downstream of the barrier was ∼0.1 AU. Shock-associated increase Figure 2 shows the modulation parameter B • V , where B is the total IMF magnetic field strength and V is the solar wind velocity; the GOES-7 proton data are in linear scale and the interplanetary variations are deduced for the Apatity and Moscow NMs.The B • V parameter plays a crucial role in theoretical models of Forbush decreases (Wibberenz et al., 1998) It is clear from Fig. 1 that CR variations estimated from the GOES data are greater than the variations observed in Moscow after 12:00 UT.This time represents a real onset of the Forbush decrease in Moscow (Fig. 2, fourth panel).The increase in shock-associated particles started at about the same time and coincided with the first maximum of the B • V parameter.Cosmic ray variations in Apatity show the similar behavior (Fig. 2, third panel). The spectrum of shock-associated particles deduced from GOES data was very soft (Fig. 
1, second panel).The maximum energy of these particles was less than 1 GeV and they resulted in the statistically significant enhancement in Moscow only due to changes in the geomagnetic cutoff (Fig. 1, third panel).Note that protons with approx.energy of the atmospheric cutoff were observed between 13:00-19:00 UT in the stratosphere above Moscow (Struminsky, 1992). The shock wave on 20 October 1989 was one of the strongest in the 22nd solar cycle, considering its effects on cosmic rays and the magnetosphere, so the obtained value of 1 GeV may provide the upper energy limit of the shock acceleration at a distance of 1 AU in the heliosphere. However, the shock-associated particles arrived well after SSC and looked like they were trapped between two magnetic barriers.These barriers correspond to maximums of the B • V parameter: B 1 • V 1 = 22.7 nt • 631 km/s at 13:00 UT and B 2 • V 2 = 22.9 nt • 785 km/s at 18:00 UT.A distance between the magnetic walls was ∼0.09 AU at the Earth's orbit and it should be greater near the Sun.Apparently, this is a radial dimension of the magnetic trap and a mean free path of particles inside the trap in this direction. Acceleration versus trapping The nature of the shock-associated particles on 20 October 1989 is not clear.The main question is, were these particles accelerated near the flare site and then injected into the trap or were they accelerated by the shock from thermal energies? In the case of the classical shock acceleration, the spectrum of shock accelerated particles should become harder, if a shock wave is approaching the observer, but the opposite is true for 20 October 1989 (Fig. 1, second panel).However, it would not be so for a shock-like acceleration in a compressing trap. Let us discuss possibility of particle acceleration between the converging magnetic barriers.An average relative increase of particle energy as a result of one acceleration act (scattering from front and rear barriers) would be and a number of possible acceleration acts is where T ≈ 24 hours is a propagation time of the magnetic trap to Earth.The initial energy of the particles might increase by a factor of β k = 3.68 inside the trap on its way to the Earth.The energy of protons should be at least of several hundred MeV in order to increase their spectrum up to 1 GeV near the Earth. Therefore, the shock-associated protons should be accelerated primarily near the Sun, possibly along with the GLE protons, and then they should be injected into the trap.The proton spectrum in the trap was softer than in the GLE main phase, because the trap was not effective for energies of several hundred MeV.Note that this interplanetary structure effected simultaneously only 1-2% of galactic cosmic rays.Lario and Decker (2002) reached a similar conclusion independently that the high-energy proton population observed around the shock passage is not a locally shock-accelerated population, but rather a population confined and channeled by a complex magnetic field structure. Summary The Forbush effect on 20 October 1989 was observed on the background of the 19 October 1989 GLE during the large geomagnetic storm.Effects of solar and geomagnetic variations were eliminated from total CR variations of the Apatity and Moscow NMs in order to estimate the Forbush precursory increase and the shock-associated enhancement on 20 October 1989.Different data sets of ground and space borne detectors were used in this procedure. 
Solar cosmic ray variations were deduced from the GOES-7 proton data, assuming changes in the geomagnetic cutoff in Moscow.A reasonable agreement between observed and estimated solar CR variations was obtained during the GLE isotropic phase early in the morning of 20 October 1989.The estimated geomagnetic variations of galactic CR were less than 1.5%. The Forbush pre-increase had a very hard spectrum with maximum energy more than 10 GeV.The shock-associated particles had a soft spectrum with maximum energy less than 1 GeV.They caused only a small increase in the Moscow NM count rate due to changes in the cutoff rigidity.The precur-sory increase and the shock-associated enhancements were clearly separated in time and, therefore, they should be attributed to different populations of CR. The Forbush pre-increase was caused by a single reflection of galactic cosmic rays from the shock.The shock-associated particles were trapped between two magnetic walls.These particles were accelerated close to the Sun; their initial energy might increase by a factor of four only on the way to Earth. Fig. 1 . Fig. 1.Proton flux within 84-200 and 110-500 MeV energy bands measured by the GOES-7 detector; a power law index of the proton differential energy spectrum deduced from the above data; estimated values of the maximum energy of solar protons and the effective cutoff energy in Moscow; registered total (open down triangles) and estimated solar (black up triangles) CR variations for the Moscow NM. Fig. 2 . Fig. 2. The total IMP magnetic field strength (nT) multiplied by the solar wind velocity (km/s); proton flux within 84-200 and 110-500 MeV energy bands measured by the GOES-7 detector; variations of interplanetary origin estimated for the Apatity (open triangles) and Moscow (open diamonds) NMs. Table 1 . Characteristics of NM stations
Climate migration in Asia This paper investigates the role of climatic factors in the migration decision. We use international migration flows between 198 origin countries and 16 OECD countries. We focus on the difference in the effect of climatic factors by region. Asia is an interesting region to study this relationship, because it is the most populated region in the world and the most affected one by climate change. Temperature has a smaller effect on migration towards OECD countries in Asia compared to Europe, Africa, and North America. For disasters, we only find a stronger effect on migration in Asia compared to Africa. Temperature matters in most regions while disasters do not. Generally, higher temperatures increase migration flows except in Asia, South America, and the Pacific. Introduction The last couple of decades have seen a large increase in mobility in Asia (Asian Development Bank, 2012).This holds for movements within and across borders and is focused towards urban areas.Asia is the region with the largest number of international migrants (41 percent of total international migrants, see UN, 2017a) and has seen the largest increase in international migrants across all regions of the world from 2000 to 2017 (about 62 percent, see UN, 2017a).Among the top 5 origin countries of international migrants are four Asian countries: India, Russia, China, and Bangladesh.The increase in migration has almost been exponential.Figure 1 presents the annual total migration flows from all Asian countries towards 16 OECD countries included in our data set.We observe a strong upward trend with drops around the 1997 Asian Financial Crisis and the Global Financial Crisis.Over our observed time period we see that migration flows in our sample almost quadrupled. At the same time, this region will be heavily affected by the impacts of climate change in the future (Asian Development Bank, 2012).Research has shown that climate change symptoms include an increase in the frequency, severity, and likelihood of weather-related disaster and increases global temperatures (IPCC, 2012).In fact, Asia is already the region most prone to disasters (UN, 2017b) and people are already displaced by disasters (Asian Development Bank, 2012).Its topography, with islands, deltas, mountains, and deserts makes it prone to different kinds of climate change impacts.These include, for example, weather-related disasters (storms, floods, droughts, and extreme temperature events), sea-level rise, water scarcity, food insecurity, health impacts due to climate-sensitive diseases, and ocean acidification (UN, 2017b). The combination of being the most populous region in the world and the most affected region by impacts of climate change makes Asia an interesting region to study the effects of climatic factors on migration.In this paper, we investigate whether climatic factors such as temperature and weather-related disasters are drivers of international migration out of all Asian countries.While country-specific studies in Asia exist (e.g.Bardsley andHugo 2010 andBohra-Mishra et al. 2014), this is the first panel data study that uses migration flows out of all Asian countries over a long period of time (36 years).To be precise, we use data for migration flows from 1980 to 2015 out of all Asian countries towards 16 OECD destination countries. 
Several interesting results stand out.First, we find that temperature, but not weather-related disasters, have a significant direct effect on migration in our sample.Temperature has a smaller effect on migration towards OECD countries in Asia compared to Europe, Africa, and North America.For disasters, we only find a stronger effect on migration in Asia compared to Africa.Temperature matters Climate migration in Asia in most regions while disasters do not.Generally, higher temperatures increase migration flows except in Asia, South America, and the Pacific.This result contradicts studies done on larger data sets by, for example, Aburn and Wesselbaum (2019) who use flows out of 198 origin countries and find a significant positive direct effect of temperature on migration.This result does relate to the discussion about whether climatic shocks can trap populations.Papers such as Gray and Mueller (2012) or Noy (2017) argue that climatic shocks can reduce mobility, which is what we find in our sample.Finally, we find evidence of non-linear effects of temperature on migration with more migration out of countries that origin relatively small, large increase in temperature anomalies respectively.In contrast, we do only find a significant linear effect of weather-related disaster in our sample. The paper extends the the existing literature on the climate-migration relationship in Asia.Alam (2003) shows that changes in rainfall patterns, via affecting droughts and agricultural productivity, have affected migration in Bangladesh.Further, Gray and Mueller (2012), again for Bangladesh, find that crop failure has an effect on migration in a sample of 1,700 households between 1990 and 2004.Using cell phone operator data to measure mobility, Lu et al. (2016), study the displacement and migration after Cyclone Mahasen affected Bangladesh in 2013.They do not find permanent displacement effects.Dun (2011) documents emigration out of floodprone eras in the Mekong Delta in Vietnam. Data In this paper, we combine data on international migration flows between 198 origin countries and 16 OECD countries with data on temperature and weather-related disasters.Migration data is taken from the paper by Aburn and Wesselbaum (2019).They compile a panel data set of migration from 198 origin countries to 16 OECD destination countries from 1980 to 2015 on an annual basis.The data is taken from the 2015 Revision of the United Nations' Population Division and has been merged with data from the OECD and Ortega and Peri (2013).This migration data set covers only regular, permanent migration, but excludes undocumented immigration.In addition, it will also include people who have already entered a destination country and have changed their visa status (e.g. from temporary to permanent).The choice of destination countries is dictated by data availability.Our destination countries include countries belonging to the 20 countries in the world that have (i) the largest immigrant stock and (ii) the largest number of international migrants in 2015.Our data set covers a sizable amount of migrants with almost five million migrants per year towards the end of our sample. 
We are interested in whether climate variables affect the decision to migrate and whether these effects are different in Asia compared to other continents. We use two different variables to measure changes in climatic conditions: temperature as a measure of slow-moving changes and weather-related disasters as a measure of sudden events. Temperature is measured as temperature anomalies (in degrees Celsius) relative to the monthly average from 1951 to 1980. The data are taken from the Berkeley Earth database, which computes a regional temperature field from weather stations. This temperature field is then averaged to obtain a country-by-year measure of temperature. Weather-related disasters include storms, droughts, floods, and extreme temperature events. Data are taken from the EM-DAT database of the Centre for Research on the Epidemiology of Disasters (CRED). An event is classified as a disaster if at least one of the following criteria holds: (i) ten or more people killed, (ii) one hundred or more people affected, (iii) a state of emergency is declared, or (iv) a call for international assistance is issued.

Finally, we also control for the effect of income on the migration decision, as income is usually found to be a driver of migration. We use data for per capita Gross Domestic Product (GDP, for short) as a proxy variable for income. Data are taken from the World Bank database and are GDP per capita at constant 2010 U.S. Dollars. Other control variables are population, political freedom, and life expectancy. The population variable is total population taken from the World Bank database. The measure of political freedom is the polity2 variable from the Polity IV project (Center for Systemic Peace). This variable varies between 10 (strongly democratic) and -10 (strongly autocratic). Life expectancy (average life expectancy, in years) is also taken from the World Bank database. Finally, the share of agricultural land measures the share of total land area used as arable land, permanent cropland, and permanent pastures.

Table 1 provides summary statistics for all our variables. Temperature anomalies vary between -1.9 and 3.03 degrees Celsius, with an average of 0.52 degrees Celsius. The results also show a high degree of heterogeneity in the increase of temperature across countries. Figure 2 in the appendix plots the average temperature anomaly across our sample. We observe a clear upward trend over the entire sample period. In Asia, the countries with the lowest average temperature anomaly in our sample are Hong Kong, Bangladesh, and Laos. The countries with the largest average temperature anomaly in Asia are Russia, Turkmenistan, and Mongolia.

Econometric approach

We estimate the following fixed-effects regression model,

log(Migration_{ij,t}) = β X_{i,t} + γ_j + δ_t + ε_{ij,t},

where our dependent variable is the current migration flow, Migration_{ij,t}, between origin country i and destination country j at time t.
These flows are explained by the set of variables in X_{i,t}, such as temperature, weather-related disasters, and income. We are interested in the effects of our independent variables, captured by the vector β. The ε_{ij,t} are i.i.d. error terms with mean 0 and variance σ². All regressions use standard errors clustered at the country-pair level (Mayda 2010 and Ortega and Peri 2013). By doing so, we address the problem that there are likely common unobserved random shocks at the country-pair level; as a consequence, all observations within each country-pair would be correlated. All regressions use destination country fixed effects, γ_j, and time (year) fixed effects, δ_t. Since we interact the climate variables with a continent dummy, we cannot include origin country fixed effects (due to collinearity), but the continent dummy will capture factors such as culture.

We want to stress that we explicitly do not control for other socioeconomic variables such as unemployment, conflict, or immigration laws. Most importantly, including these and other variables would lead to the typical "bad control" problem, where control variables are themselves endogenous (Hsiang et al. 2013). This would lead to biased estimates. Further, we want to investigate the direct (or total) effects of climate variables on migration and, therefore, have to exclude control variables that would change the interpretation to partial effects.

Finally, it should be highlighted that a key advantage of this regression is that our independent climatic variables are plausibly exogenous, and reverse causality (migration drives temperature) is not a concern. At the same time, a limitation of this reduced-form approach is that it prevents us from understanding how exactly climatic variables affect migration. Nevertheless, causal inference rests on the random variation in climatic variables within each country over time. Another limitation is that we cannot control for the stock of migrants, as we do not have data on this variable (over time and space).
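The estimation strategy just described (origin-level climate variables interacted with continent dummies, destination and year fixed effects, no origin fixed effects, and standard errors clustered at the country-pair level) maps onto a standard panel regression call. The Python sketch below uses statsmodels as one possible implementation; the file name and all column names are hypothetical placeholders, since the actual variable names in the authors' data set are not given.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per origin-destination-year, with hypothetical columns
# log_flow, temp_anomaly, disasters, log_gdp, log_pop, polity2, life_exp, agri_share,
# continent, destination, year, and a country-pair identifier "pair".
df = pd.read_csv("migration_panel.csv").dropna()  # placeholder file name

model = smf.ols(
    "log_flow ~ temp_anomaly * C(continent) + disasters * C(continent)"
    " + log_gdp + log_pop + polity2 + life_exp + agri_share"
    " + C(destination) + C(year)",
    data=df,
)

# Cluster the standard errors at the country-pair level, as described in the text.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["pair"]})
print(result.summary())
```

The `*` in the formula expands into main effects plus interactions, so the continent dummies enter the model even though, as in the paper's tables, one would typically not report them; origin fixed effects are deliberately omitted, mirroring the collinearity argument above.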
Finally, Model 3 includes control variables: log GDP, log population, policy, life expectancy, and the share of agricultural land.This again confirms our findings for the effect of temperature discussed for Model 1.However, adding controls affect the results for disasters.We find that the only significant difference for the effect of disasters on migration is between Asia and Africa.The results show that there is a larger effect of disasters on migration in Asia relative to Africa.We do not find significant differences between Asia and the other continents.Temperature has a significant negative marginal effect in Asia ( − 0.227, p = 0.000), while it has positive marginal effects in Europe (0.129, p = 0.000), North America (0.188, p = 0.004), and negative effects in South America ( − 0.221, p = 0.010) and the Pacific ( − 0.683, p = 0.001).Disasters have an insignificant effect on migration in Asia (0.019, p = 0.107), Europe (0.029, p = 0.256), North America (0.013, p = 0.261), South America ( − 0.046, p = 0.250), and the Pacific (0.112, p = 0.172) but a negative significant effect for Africa (-0.063, p = 0.012). Overall, we can draw the following conclusions.Temperature has a smaller effect on migration towards OECD countries in Asia compared to Europe, Africa, and North America.For disasters, we only find a stronger effect on migration in Asia compared to Africa.Temperature matters in most regions while disasters do not.Generally, higher temperatures increase migration flows except in Asia, South America, and the Pacific. The negative effect is surprising as it contradicts findings in larger samples by Backhaus et al. (2015), Cattaneo and Peri (2016), Cai et al. (2016), or Aburn and Wesselbaum (2019).This negative sign implies that higher temperatures decrease migration flows.This result relates to the discussion about whether climatic shocks are drivers or inhibitors of migration.Studies such as Feng and Oppenheimer (2012), Hunter et al. (2013), and Aburn and Wesselbaum (2019) support the driver hypothesis.Other studies, e.g. Henry et al. (2004), Gray and Mueller (2012), Black et al. (2013), and Noy (2017) find that climate shocks can reduce mobility.The idea is that climatic conditions can trap people, for example, because households use asset holdings to smooth consumption over negative (climatic) shocks or because the spatial insurance channel becomes less efficient (see Dillon et al. 2011). While our aggregate data does not allow us to look inside the Black Box and investigate why migration reacts differently across regions, some mechanisms are plausible.Table 3 presents the means of the variables in our analysis by continent.Some factors that could explain the different effects of temperature across continents are cultural differences (e.g. the role of families and social networks), different policy regimes (Asia is the most autocratic region in the sample) which could affect immigration laws and regulations around remittances, the role of agriculture in the generation of income, and a very high population and population density. 
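To make the "joint effect" logic explicit: the continent-specific marginal effect of temperature is the base-category (Asia) coefficient plus the corresponding interaction coefficient. In the small illustration below, the interaction values are hypothetical numbers chosen only so that the sums reproduce the marginal effects quoted above; they are not the paper's estimated interaction coefficients.

```python
# Base-category (Asia) temperature coefficient and hypothetical interaction terms.
beta_temp_asia = -0.227
interaction = {                 # beta for temp_anomaly x continent (illustrative values)
    "Asia": 0.0,
    "Europe": 0.356,            # -0.227 + 0.356 =  0.129
    "North America": 0.415,     # -0.227 + 0.415 =  0.188
    "South America": 0.006,     # -0.227 + 0.006 = -0.221
    "Pacific": -0.456,          # -0.227 - 0.456 = -0.683
}

for continent, b_int in interaction.items():
    # Joint (total) marginal effect of temperature for this continent.
    print(continent, round(beta_temp_asia + b_int, 3))
```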
Conclusion In this paper, we investigate the role of climatic factors for international migration decision in Asia.Asia is an interesting region to study this relationship because it is the most populous region in the world and the most affected region by climate change.Our contribution is to offer a panel data study that uses migration flows out of all Asian countries over a long period of time (36 years) in contrast to the existing country-specific studies in Asia, for example, Bardsley and Hugo (2010) and Bohra-Mishra et al. (2014). In contrast to studies in larger panel data sets (Cattaneo andPeri 2016, or Aburn andWesselbaum 2019) we find that temperature has a negative significant effect on international migration flows in Asia.Surprisingly, this effect is negative while other papers (e.g.Feng and Oppenheimer 2012 or Hunter et al. 2013) find a positive effect.This result supports the viewpoint that climatic shocks can trap populations.Papers such as Gray and Mueller (2012) or Noy (2017) argue that climatic shocks can reduce mobility, which is what we find in our sample.We do not find a significant effect of weather-related disaster in our sample. Several limitations of our paper should be noted.Our findings hold for international migration but do not allow to draw conclusions about other interesting and relevant migration processes, such as internal or rural to urban migration flows.Further, due to data limitations we cannot account for undocumented migration flows. Fig. 1 Fig. 1 Total Migration Flows.Notes: Total international migration flows (in million) out of Asia over time Table 1 Descriptive statisticsClimate migration in Asia the largest average temperature anomaly in Asia are Russia, Turkmenistan, and Mongolia.Weather-related disaster (see Fig.3in the appendix) have almost tripled since the beginning of our sample in 1980.The countries with the lowest number of disasters in Asia are the United Arab Emirates, Qatar, and Singapore, while China, India, and the Philippines have the highest number of weather-related disasters in our sample.We find that countries have different exposures to disasters ranging from 0 to 33 per year. Table 2 Main results Notes: Dependent variable: log migration flow.Controls are: log GDP, log population, policy, life expectancy, agricultural land.Standard errors are clustered at the country-pair level and shown in parenthesis.All regressions use year and destination country fixed effects.Constant and continent dummies not shown.Significance levels: ***p < 0.01 , ** p < 0.05 , * p < 0.10 Table 3 Continent means Notes: The Dunn test tests for stochastic dominance among multiple pairwise comparisons
Term Spotting: A quick-and-dirty method for extracting typological features of language from grammatical descriptions Starting from a large collection of digitized raw-text descriptions of languages of the world, we address the problem of extracting information of interest to linguists from these. We describe a general technique to extract properties of the described languages associated with a specific term. The technique is simple to implement, simple to explain, requires no training data or annotation, and requires no manual tuning of thresholds. The results are evaluated on a large gold standard database on classifiers with accuracy results that match or supersede human inter-coder agreement on similar tasks. Although accuracy is competitive, the method may still be enhanced by a more rigorous probabilistic background theory and usage of extant NLP tools for morphological variants, collocations and vector-space semantics. Introduction The present paper addresses extraction of information about languages of the world from digitized full-text grammatical descriptions. For example, the below reference describes a language called Kagulu, whose grammatical properties are of interest for various linguistic predicaments. Petzell, Malin. (2008) The Kagulu language of Tanzania: grammar, text and vocabulary (East African languages and dialects 19). Köln: Rüdiger Köppe Verlag. 234pp. The typical instances of such informationextraction tasks are so-called typological features, e.g., whether the language has tone, prepositions, SOV basic constituent order and so on, similar in spirit to those found in the database WALS wals.info (Dryer and Haspelmath, 2013). Given its novelty, only a few embryonic approaches (Virk et al., 2019;Wichmann and Rama, 2019;Macklin-Cordes et al., 2017;Hammarström, 2013;Virk et al., 2017) have addressed the task so far. Of these, some are word-based and some combine words with more elaborate analyses of the source texts such as frame-semantics (Virk et al., 2019). All approaches so far described require manual tuning of thresholds and/or supervised training data. For the present paper, we focus on the prospects of term spotting, but in a way that obviates the need for either manual tuning of thresholds or supervised training data. However, this approach is limited to the features for which a (small set of) specific terms frequently signal the presence thereof, e.g., classifier, suffix(es), preposition(s), rounded vowel(s) or inverse. Term spottting is not applicable for features which are expressed in a myriad of different ways across grammars, e.g., as whether the verb agrees with the agent in person. It may be noted that the important class of word-order features, which are among the easiest for a human to discern from a grammar, typically belong to the class of non-term-signalled features unless there is a specific formula such as SOV or N-Adj gaining sufficient popularity in grammatical descriptions. Term-signalled features are, of course, far simpler to extract, but not completely trivial, and hence the focus the present study. The general-form premises to the problem addressed here are as follows. There is a set D of rawtext descriptions of entities from a set S, such that each d ∈ D mainly describes exactly one s ∈ S. If a term k describing a property of objects in S occurs in a document d to a significant degree, the object s described in d actually has the property signalled by k. 
These premises apply to other domains and texts, e.g., ethnographic descriptions, than the linguistic descriptions in the present study. Judging from the surveys of Nasar et al. (2018) and Firoozeh et al. (2020), the premise that each d ∈ D mainly describes exactly one s ∈ S is not dominant across scientific domains. Consequently most work has focussed on the broader tasks of extracting key-insights and salient keywords from scientific documents. We are not aware of any work in other domains on the specific task addressed in this paper. Data The data for the experiments in this essay consists of a collection of over 10 000 raw text grammatical descriptions digitally available for computational processing (Virk et al., 2020). The collection consists of (1) out-of-copyright texts digitized by national libraries, archives, scientific societies and other similar entities, (2) texts posted online with a license to use for research, usually by university libraries and non-profit organizations (notably the Summer Institute of Linguistics), and (3) texts under publisher copyright where quotations of short extracts are legal. For each document, we know the language it is written in (the meta-language, usually English, French, German, Spanish, Russian or Mandarin Chinese, see Table 1), the language(s) described in it (the target language, typically one of the thousands of minority languages throughout the world) and the type of description (comparative study, description of a specific feature, phonological description, grammar sketch, full grammar etc). The collection can be enumerated using the bibliographical-and metadata contained in the open-access bibliography of descriptive language data at glottolog.org. The grammar/grammar sketch collection spans no less than 4 527 languages, very close to the total number of languages for which a description exists at all (Hammarström et al., 2018). Figure 1 has an example of a typical source document -in this case a German grammar of the Ewondo [ewo] language of Cameroon -and the corresponding OCR text which illustrates the typical quality. In essence, the OCR correctly recognizes most tokens of the meta-language but is hopelessly inaccurate on most tokens of the vernacular being described. This is completely expected from the typical, dictionary/training-heavy, contemporary techniques for OCR, and cannot easily be improved on the scale relevant for the present collection. However, some post-correction of OCR output very relevant for the genre of linguistics is possible and advisable (see Hammarström et al. 2017). The bottom line, however, is that extraction based on meta-language words has good prospects in spite of the noise, while extraction of accurately spelled vernacular data is not possible at present. Model At first blush, the problem might seem trivial: simply look for the existence of the term and/or its relative frequency in a document, and infer the feature associated with the term. Unfortunately, to simply look for the existence of a term is too naive. In many grammars, terms for grammatical features do occur although the language being described, in fact, does not exhibit the feature. For example, the grammar may make the explicit statement that there are "no X" incurring at least one occurrence 1 . Also, what frequently happens is that comments and comparisons are made with other languagesoften related languages or other temporal stagesthan the main one being described 2 . 
Furthermore, there is always the possibility that a term occurs in an example sentence, the title of a reference or the like. However, such "spurious" occurrences will not likely be frequent, at least not as frequent as a term for a grammatical feature which actually belongs to the language and thus needs to be described properly. (One example of an explicit statement of absence is the Pipil grammar of Campbell (1985, 61), which says that Pipil has no productive postpositions: "It should be noted that unlike Proto-Uto-Aztecan (Langacker 1977:92-3) Pipil has no productive postpositions. However, it has reflexes of former postpositions both in the relational nouns (cf. 3.5.2) and in certain of the locative suffixes (cf. 3.1.3)".) But how frequent is frequent enough? We will try to answer this question.

Let us assume that a full-text grammatical description consists of four classes of terms:

Genuine descriptive terms: Terms that describe the language in question.

Noise terms: Descriptive terms that do not accurately describe the language in question (i.e., through remarks on other languages or on things not present, as explained above).

Meta-language words: Words in the meta-language, e.g., 'the', 'a', 'run' if the meta-language of description is English, that are not linguistic descriptive terms.

Language-specific words: Words that are specific to the language being described but which do not describe its grammar. These can be morphemes of the language, place names in the language area, ethnographic terms etc.

We are interested in the first class and, in particular, in distinguishing it from the second class. Except for rare coincidences, the words from these two classes do not overlap with the latter two, so the latter can be safely ignored when counting linguistic descriptive terms. Of the terms that genuinely describe a language, we would expect their frequency distribution in a grammar to mirror their functional load (Meyerstein, 1970), i.e., their relative importance, in the language being described. Thus we assume each language has a theoretical distribution L(t) of terms t which is our object of interest. However, as noted, grammars typically also contain "noise" terms which distort the reflection of L(t). A simple model for the frequency distribution G(t) of the terms of a grammar is that it is composed merely of a sample of the "true" underlying descriptive terms L(t) and a "noise" term N(t), with a weight α balancing the two:

G(t) = α L(t) + (1 − α) N(t)

For example, if a language actually has duals, L(dual) > 0, perhaps close to 0.0 if there are only a handful of nouns with dual forms, but higher if there are dual pronouns, dual agreement, special dual case forms and so on. For most languages, we expect the functional load of verbs to be rather high. The purity level α captures the fraction of tokens which actually pertain to the language, as opposed to those that do not. (Those tokens are typically of great interest for the reader of the grammar; they are "noise" only from the perspective of extraction as in the present paper.)

Suppose now that we have several different grammars for the same language. As they are describing the same language, their token distributions are all (independent?) samples of the same L(t), but there is no reason to suppose the noise level and the actual noise terms to be the same across different grammars. Thus we have:

G_i(t) = α_i L(t) + (1 − α_i) N_i(t)

If we had infinitely many independent grammars accurately describing a language (and nothing else), their combined distribution would converge to L(t) in the limit.
Without the luxury of so many representative grammars, we can still attempt the simpler task of estimating the purity level α_i of each grammar. That is, given actual distributions G_1(t), ..., G_n(t), how can we make a heuristic estimate of α_i? The following procedure suggests itself. Take each term t of each grammar G_i and calculate the generality of its incidence, g_i^L(t), by comparing the fraction in G_i(t) to the fraction of t in all other grammars for the language L. For example, suppose G_i(dual) = 0.1 for some grammar G_i. Maybe for two other grammars of the same language, G_j(dual) = 0.01 and G_k(dual) = 0.00, i.e., the term barely occurs there. The term dual would then have poor generality (see Table 2). Terms like triphthongs, gender and stress have a role in describing the language and consequently show a generality close to 1.0, while "noise" terms like cojocaru and ghe are less common as items of description of the Romanian language. Grammars with lots of terms with poor generality have a high level of noise, and, conversely, grammars where all terms have a reciprocated proportion in other grammars are pure, devoid of noise. Thus, α_i can be gauged from the generality values of the terms it contains. To remove outliers and to speed up the calculation by removing hapax terms, in the experiments below we measure all frequencies on a logarithmic scale.

We now return to the question "how frequent is frequent enough?". We can now rephrase this as: does the frequency of a term in a grammar exceed its noise level (1 − α)? Given that we know α_i for a grammar G_i, let us make the assumption that the fraction (1 − α_i) of least frequent tokens are "noise". Simply subtracting the fraction (1 − α_i) of tokens of the least frequent types effectively generates a threshold t separating the tokens being retained versus those subtracted. For example, the grammar of Romanian by Cojocaru (2004) has an α_i of 0.81 and contains a total of 83 365 tokens. We wish to subtract (1 − 0.81) · 83 365 ≈ 15 839 tokens from the least frequent types. It turns out that in this grammar this removes all the types which have a frequency of 9 or less, rendering the frequency threshold t = 9.

Let us look at an example. Table 3 has a list of grammars/grammar sketches of Romanian. Each grammar has a corresponding α_i purity level as described above, the total number of tokens, and the frequency threshold t induced by α and the token distribution. The last three columns concern the terms masculine, feminine and neuter respectively. The cells contain the frequency of the corresponding term, as well as the fraction of pages on which it occurs. The fraction of page occurrences is, of course, similar to, and highly correlated with, the fraction of tokens, but is often easier to interpret intuitively. We show it here for reference, although it is not advantageous to make use of it in any of the above calculations. Thus, for example, in Cojocaru (2004) the term masculine occurs 240 times in total, distributed over 74 of the total 184 pages (≈ 0.40). The cells with a frequency that exceeds the threshold t for their corresponding grammar are shown in green, indicating that the term in question is probably genuinely describing the language. In this case, by majority consensus, we can infer that the language Romanian [ron] does have all three of masculine, feminine and neuter.

Thanks to a large manually elaborated database of languages with classifiers (Her et al., 2021), we were able to do a formal evaluation of extraction accuracy for this feature.
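Before turning to the evaluation, here is a short Python sketch of one way the procedure above could be implemented. Since the exact formulas for the generality g_i^L(t) and for α_i are not reproduced in the text, the definitions used below (capping the reciprocated proportion at 1 and taking a token-weighted average, without the logarithmic damping) are assumptions of this sketch rather than the authors' exact choices; the thresholding step follows the Cojocaru example literally.

```python
from collections import Counter

def relative_freqs(tokens):
    """Relative frequency of each term in one grammar, plus the raw counts."""
    counts = Counter(tokens)
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}, counts

def purity(grammar_tokens, other_grammars_tokens):
    """Estimate alpha_i as a token-weighted generality of the grammar's terms.

    Generality of a term is taken here as min(1, freq elsewhere / freq here),
    which is only one plausible reading of 'reciprocated proportion'.
    """
    freqs, counts = relative_freqs(grammar_tokens)
    pooled, _ = relative_freqs([t for g in other_grammars_tokens for t in g])
    total = sum(counts.values())
    alpha = 0.0
    for term, f in freqs.items():
        generality = min(1.0, pooled.get(term, 0.0) / f)
        alpha += counts[term] / total * generality
    return alpha

def frequency_threshold(grammar_tokens, alpha):
    """Subtract the (1 - alpha) least frequent tokens; return the induced frequency cut."""
    _, counts = relative_freqs(grammar_tokens)
    to_drop = (1.0 - alpha) * sum(counts.values())
    dropped, cut = 0, 0
    for term, c in sorted(counts.items(), key=lambda kv: kv[1]):
        if dropped + c > to_drop:
            break
        dropped += c
        cut = c  # highest frequency whose types have been fully subtracted so far
    return cut

def has_feature(grammar_tokens, alpha, term):
    """A term counts as genuinely descriptive if its frequency exceeds the cut."""
    return Counter(grammar_tokens)[term] > frequency_threshold(grammar_tokens, alpha)
```

On the worked example above, a grammar with α = 0.81 and 83 365 tokens would have roughly 15 800 of its rarest tokens subtracted, which is exactly the computation frequency_threshold performs before the per-term test in has_feature is applied.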
We extracted the feature classifier(s) from 7 284 grammars/grammar sketches written in English, spanning 3 220 languages. Each language was assessed as per the majority vote of the extraction results of the individual descriptions, with ties broken in favour of a positive result. For languages where only one description exists, the noise level was taken to be the average noise level of grammars of other languages of similar size (as measured by number of tokens).

A comparison between the Gold Standard database and the extracted data is shown in Table 4. The overall accuracy is 89.1%, to be compared with human inter-coder agreement on similar tasks, i.e., 85.9% or lower (as per Donohue 2006 and Plank 2009, 67-68). Not surprisingly, the method has better precision (512/(512+34) ≈ 0.94) than recall (512/(512+317) ≈ 0.62). The majority of errors are languages with classifiers which are not recognized as such by the term-spotting technique. Simple inspection reveals that in the majority of these cases a different term, e.g., "enumerative", is used in place of the term in question. There are also errors where the automatic technique infers a slightly too high threshold for languages which have grammars from a large temporal range. The fact that descriptive tradition changes over time may be a reason to refine the procedure for calculating reciprocated proportions.

We may add a few remarks on some obvious refinements. Excluding negative polarity mentions, by which we mean mentions where no|not|absent|absence|absense|lack|neither|nor|cannot occurs in the same sentence as the sought-after term, makes no significant change to the overall accuracy. Using only the temporally latest description (instead of a majority vote) to assess the status of a language with several grammars also made no significant change to the overall accuracy (in fact, it decreased by 2 percentage points). Furthermore, using only the most extensive description, i.e., the longest grammar, or the longest grammar sketch if there are no full grammars, had a negative impact on overall accuracy (down by 8 percentage points). These results seem to speak in favour of making use of multiple witnesses for each language if they are available, even if they are of different lengths and ages. If these impressions generalize, length and age differences between grammars, which are real, need to be addressed in a more sophisticated manner than simply excluding the old and short.

The above evaluation is relevant for the case when there is a specific term (or an enumerable set thereof) associated with the desired feature. It then shows what accuracy one may expect without supplying a threshold or any information other than the keyword itself. Choosing the right term(s) for a given linguistic feature requires knowledge of the feature and of the way it is often (not) manifested in the literature (cf. Kilarski 2013 on classifiers versus other kinds of nominal classification).

Conclusion

We have described a novel approach to the extraction of linguistic information from descriptive grammars. The method requires only a term of interest, and no manual tuning of thresholds or annotated training data. However, the approach can only address information that is associated with an enumerable set of specific terms. When this is the case, a broad evaluation shows that the results match or exceed the far more time-consuming manual curation by humans.
Future work includes automated handling of collocations and morphological variants, vector-space lexical semantics, automated multi-lingual extraction, and establishing the method on a more rigorous probabilistic foundation.
A Glimpse at the Culture of the Qajar Era: Public Attraction to Gambling in the Qajar Era

Games are among the cultural manifestations of human communities which have existed since ancient times and developed during peaceful periods. In the Qajar era, during the reign of Nasser-al-Din Shah, because relative peace and prosperity had been brought about, the people, and especially courtiers, showed great interest in games and spent part of their daily life on them. In this article we have attempted to present some of the angles and dimensions of a number of these games using the itineraries of orientalists and the histories and works of that period. In this regard, we have borne in mind time, venue, extensiveness, customs and the reflection of these games in literature, and based on the evidence we have demonstrated that the Qajar era is one of the most prominent periods in Iranian history and culture regarding the diversity and colorfulness of these games. The skillfulness of game players, large bets, struggles and arguments between opponents, special formalities, etc., are all criteria for the acceptability and popularity of games in this age, and these have been taken into consideration.

Introduction

In the past, when games were not as variegated, people were compelled to play games to spend their free time and be entertained. Games have existed since ancient times and have not developed only recently; for instance, qaap-bazi goes back to the millennia before Christ and "backgammon has been discovered in the excavations around Nineveh" (Durant, 1989). The persistent presence of games in the course of history, particularly in Iran, about which no comprehensive research has been done, makes their investigation and analysis necessary. We have therefore examined some of these games in the Qajar era and attempted to cast some light on their darker corners; time, venue, extensiveness, customs and the reflection of these games in literature have directed our efforts in this research. The basis of our work has been works written in that age and the itineraries of orientalists who had travelled to Iran during the Qajar period. Indeed, we do not claim that a complete observation has been made, because not all the sources about that period have been published. Chess, backgammon, aas bazi (playing aces), ganjafeh (cards), and qaap bazi are the games we have pondered on. In the texts of this period, which mainly reflect court affairs, these games are frequently mentioned; as Pollack says, "chess is very much favored by the aristocratic class" (Pollack, 1982), and in part of a poem, Naqib-ul Mamalek, Nasser-al-Din Shah's storyteller who also wrote some poetry, says, "kingly game ought to be board or chess and AAS" (Naqib-ul Mamalek). Of the five games mentioned above, there were debates about the legitimacy or illegitimacy of chess and backgammon, and these were considered almost permissible (Pollack, 1982), but jacks (qaap bazi), ganjafeh, and aas bazi were considered gambling, hence their illegitimacy. Of course, it should be noted that all games involving losing and winning money were considered haraam; therefore, the above-mentioned games, which involved winning and losing, were also considered haraam games. Chess and backgammon are well known, but ganjafeh, aas bazi, and jacks (qaap bazi) require more explanation. Card games played at that time included aas baazi (aces) and ganjafeh; ganjafeh was more native, and "its cards, unlike common cards, were made in Iran" (Wills, 1989) and consisted of ninety cards "every eight of which were illustrated differently from
other cards and playing them did not require thinking and creativity" (Chardin, 1993). Aas bazi has been described in the itineraries of travelers as "one of the oldest card games. These cards are made of small, rectangular, ivory pieces on which beautiful miniatures are illustrated in golden colors; these ivory pieces, some of which are still found, were later replaced by illustrated and varnished cardboards with 5 five-card sets called king, ace, queen, LAKAT and jack, each distinguished from the others by its yellow, green, red, black or golden background" (D'almani, 1956), and Brugesche gives a more accurate description of Aas cards: "Iranian cards, which are made of cardboards, have five types of illustrations with four special signs which add up to twenty cards. These cards' quintuplet of illustrations is organized in the same form as the European cards, with the first one depicting the picture of a snake coiling itself around the Lion and Sun symbol; the second card is the king, similar to the European card; the third card is the queen, with a mother and child drawn on it; the fourth card is the jack, which is similar to the European jack and shows the head of a soldier; and the fifth card is Lakat, which depicts a dancing woman and has no parallel in European cards. Iranians love playing aas and, in addition to this, chess and backgammon are also among their most common hobbies" (Brugesche, 1989). Qaap bazi was a game played with a cube-shaped piece of bone taken from the leg of a sheep; the qaap (=qaab), which has four faces each carrying a different score, is similar to a common die. The qaap was thrown in the air, and the person who succeeded in landing the desired face was considered the winner. Extensiveness The extensiveness of these games in the Qajar era was so great that a glimpse at the diaries of Amin Lashkar or Muhammad Ali Foroughi astonishes the reader; it is as if their entire time was spent on playing. These games were so variegated that Hedayat the Mokhber-ul Saltaneh became determined to collect them in a book (Hedayat, 2006), but failed to do so. The pervasiveness of the games gradually enabled some of the prominent dignitaries of the time to gain remarkable mastery in a certain game; Qajar statesmen had become so dexterous in some games that Amir Nizam Garusi, a master among his peers in chess playing (Mo'ayyer-ul Mamalek, 2011), rivaled Napoleon III (Ibid, 1976), as did Mehdi Qoli Khan the Majd-ud Doleh with the Lord of Salisbury (Ibid, 1976). Indeed, it has been said that the Iranian method of playing differed from that of the Europeans, as in the case of Majd-ud Doleh, who in the presence of the king played chess with the Lord of Salisbury, several times chancellor of the UK, lost the game and said, 'I was not acquainted with the European style of playing. Had we played in our own way I would have won.' His protest was interpreted for the Lord, and the Lord, who flaunted his chess playing, stated, 'Tell me the difference between the two; after getting acquainted I am prepared to play in your way.' They played another set and the Lord lost the game (Mo'ayyer-ul Mamalek, 2011). However, Pollack believes that there is no difference between the Iranian and European chess playing rules (Pollack, 1982).
There were even exaggerations about the skill of players, as when Zell-ul Sultan, Nasser-al-Din Shah's son, claimed that he had never lost a chess game (Feuvrier, 1987). Among these games, Nasser-al-Din Shah had a penchant for backgammon and gambling. From among the earlier Qajar kings, it has been mentioned in the biographies of Fathali Shah Qajar that he played AAS on horseback and that "distributing and collecting cards, and giving and taking of lost and won items were done by attendant footmen" (Mostofi, 1981). Knowing the games and having skill in them brought distinction and privileges, but conversely, not knowing a game was considered a kind of backwardness and ignorance; thus, courtiers gained favor by perfecting their skills. They believed that chess playing required considerable intelligence, and since they considered chess an improver of intelligence and memory, it was more highly regarded than board games; for this reason, Mozaffar-uddin Shah, who was slow in learning chess, was denounced (Hedayat, 2006). Among the aristocrats, most games were played with sumptuous bets. Winning and losing was mostly based on wagering money, sometimes estates and farms, and once in a while "the wager was women" (Pollack, 1982). Precious items such as gold and turquoise rings were no exception in these competitions. It has been said that the wagers were determined before the game, and princes and dignitaries invited to the presence of the kings were informed about the sum they had to carry on them for the game (Pollack, 1982). In the games played in the presence of the king or the Great Chancellor, "the king was always in luck"; that is, players usually resigned fawningly in order to please His Majesty, and the perpetual winner was the Shahanshah (Ibid; Itemad-u Saltaneh, 1966). Different Modes of Playing Apart from one-on-one playing, which was the customary mode in which the king's royal person himself played with someone (Itemad-u Saltaneh, 1966), one common manner of playing individual games was group playing; in this mode, they took either of two courses: if a player defeated all other opponents, the defeated contrived to compete against the winner as a team, i.e. one person, as a representative of the team, played alone against the opponent and enjoyed the advice of his companions; the skillful solo chess playing of the Afghan man against Nasser-al-Din Shah's team of courtiers is an instance of this case (Mo'ayyer-ul Mamalek, 1983). In another mode, two teams competed against each other, but the game was played between two representatives of each group while the kings observed their actions (Mo'ayyer-ul Mamalek, 2011; Itemad-u Saltaneh, 1966; Hedayat, 2006). Venue The center of the games was the court, and court personages who had financial power were the main participants of game gatherings; outside the court, they were also the main supporters and the center of gamblers' nights. The house of Karim Shireh-i, Nasser-al-Din Shah's famous fool, was another gathering place outside the court for the contenders (Nasser-al-Din Shah, 1999).
In this period, unlike the Safavid age, we no longer see games played openly in coffeehouses; if there were any such cases, they were played clandestinely and we have no information about them. Violators of the gambling prohibition were severely punished, and as Pollack observes, "if sentinels catch people gambling, flogging is certain" (Pollack, 1982). But this strictness should not be taken very seriously, for it was easily circumvented or covered up by giving bribes; infringements of the law were so extensive that it was announced in the newspaper that "some certain people ran gambling houses with the permission of officers and committed unlawful actions and officers ignored their actions by receiving some amount of money"; therefore, the king commanded that "from then on, officers or other enforcers of law should by no means ask for money and those actions be totally prohibited" (Sa'dvandian, 2001). Time Games were usually played at night, when men were free of their daily work, and continued until well past midnight and sometimes into the following morning. In Ramadan the atmosphere was different; as it has been reported, "Ramadan is the spring of gambling. In no other month is there so much gambling. In all cities of Persia, it is customary to gamble on Ramadan nights" (Eyn-u Saltaneh, 1998; Shahri, 1992; Divanbeygi, 2003). This was reprimanded at the time because the month of prayer and worship had been transformed into a month of sins (Adib-ul Hokama, 1985). At the Shah's court, a certain time was specified for playing during celebrations or hunting trips, and some of the high-ranking people or princes were invited to play (Pollack, 1982). Democracy It is needless to say that in the patriarchal society of Iran these games were prevalent among men, and the reason is quite clear: women were assumed to be weak in bodily and reasoning powers, and where it was possible for them to play at all, only some trivial games were left to them; in the customary humiliation and ridiculing of men, a rival was taunted by describing his abilities as fit only for women's games. For example, there are sentences in the Ghomarnameh such as: "This game (pachisi, which was a rather simple game) is, in fact, a game for women, children and credulous merchants" (Ghomarnameh); or "since this is a game of women and children, they had no knowledge or creativity to contrive a trick or think of an innovation" (Ibid, 1976). Besides the author's constant differentiation between man and woman, in his categorization he ascribes trivial games to women and children, and complex and difficult games to the "intelligentsia"; but we should add that there is evidence of women (of course, court women) playing backgammon and chess in the Qajar era (Nasser-al-Din Shah, 1999, 1998), although this rarely happened, and altogether we may conclude that simple games were regarded as specific to women. Observance of Religious Rules It is interesting that when playing, people remained committed to certain religious traditions and beliefs; abstaining from gambling in the direction of the kiblah was one of them (Ghomarnameh). Furthermore, since the money they won was regarded as unclean, they distributed it among the people (Amin Lashkar, 1999); at the court, this money was given out to the servants and valets (Pollack, 1982). It seems as if they were averse to spending this money, and as Chardin has depicted, in the Safavid age the income from brothels was spent on lighting the city street torches so that, in this way, that sinful money would go up in smoke (Chardin, 1966)!
Incidents and Quarrels As is evidenced from older times, trash talk was one of the staples of games and sometimes led to fights (Onsor-ul Ma'ali, 2001; Mo'ayyer-ul Mamalek, 2011). It was customary to ask a gambler who could not pay the wager to do demeaning and shameful things in order to degrade him and thereby please the winner; gamblers would start a fight over a fake card and would almost tear each other apart with their daggers; and if an unfortunate man lost the wager and was insolvent, he would struggle with all his might to free himself from the dog fastened to his neck and biting him (Orsel, 1974). If a looti (an honorable man) lost the gamble and did not have the money to pay his debt, the creditor would ask him to rise and touch the wall three times with his back, especially a little below his waist shawl, and say "the looti has not lost"; the creditor would then give up his claim to the money. But most looties did not tolerate this degradation and found the money wherever they could and paid their wager. It occurred frequently that a looti plucked one hair from his mustache and gave it to the creditor as a pledge, and the creditor not only accepted it, but displayed it to everybody and kept it in clean paper, because he was sure that this pledge was a better security than any gold, silver and jewelry (Mostofi, 1981; Shahri, 1992). Reflection in Literature and Politics Games had so permeated society that their idioms were readily used in daily events; with them people expressed their intentions and were easily understood, and games were also invoked in political events. In one instance, Ahmad Khan, son of Baqer Khan the Tangestani, confronted the English with his troops; the secretary of the English consulate in Bushehr sent a letter thick with intimidation and humiliation to Ahmad Khan the Tangestani, and Ahmad Khan replied with the following poem: Ahmad, oh king of the good, May the mother fortune be your companion, We are four aces and do not fear, Your three harlots (Lakkat; my parentheses) and the two jacks (Mostofi, 1981; Bamdad, 2008). As can be seen, Ahmad Khan, by playing with words in his poem, humiliates the enemy troops. In yet another case, in the dispute between Abdul Ali Mirza, son of Farhad Mirza the Itemad-u Saltaneh, and Jahan Shah Khan Afshar Khamsei, Khan Afshar rebukes Abdul Ali Mirza with lines in which the versifier uses a subtle pun on Khan-e Afshar and beautifully connects several expressions from backgammon (shesh-dar, goshad, narraad, and khan-e afshar). Since literature, like a mirror, reflects the incidents and people of a society, we also find in the poetry of this age considerable use of game expressions. Among the trained poets of this period who attained perfection in the Constitutional Age are two renowned poets, Adib-ul Mamalek and Nasim-e Shomal, in whose divans there are poems containing major idioms from these games, and they are important in this regard. Conclusion In this article, by providing several pieces of evidence, we have looked into a small part of the history and culture of the Qajar era and found that one of the common recreational hobbies among courtiers was gambling, and that despite evident infringement of religious laws, a commitment to religion was silently observed. As the popularity of games increased, skill and mastery in one of them brought social prestige for the player; dignitaries therefore strived to know more games and achieve more dexterity in them, and a particular culture formed in society around game gatherings. In addition, it is no exaggeration to say that one of the factors causing the socio-cultural deterioration of the Qajar era was this very issue: excessive attention to amusements, which usually ended in losing considerable sums, while the time that should have been allocated to attending to people's affairs and problems was wasted on hours of continuous playing.
3,943.8
2012-12-28T00:00:00.000
[ "Economics" ]
Experimental study of the internal flow in freezing water droplets on a cold surface The study of a freezing droplet is of interest in areas where an understanding of the build-up of ice is important, for example on wind turbines, airplane wings and roads. In this work, the main focus is to study the internal motion inside freezing water droplets using particle image velocimetry and to reveal whether mechanisms such as natural convection and Marangoni convection have a noticeable influence on the flow within the droplet. The flow has successfully been visualized and measured for the first 25% of the total freezing time of the droplet, when the velocity in the water is the highest and when the characteristic vortices can be seen. After this initial time period, the high amount of ice in the droplet scatters the PIV light sheet too much and the images retrieved are not suitable for analysis. Initially, it can be seen that the Marangoni effects have a large impact on the internal flow, but after about 15% of the total freezing time the flow turns, indicating an increased effect of natural convection on the flow. Shortly after this time, almost no internal flow can be seen. Introduction Icing on wind turbines, airplane wings and roads is a major problem, since it often requires maintenance at high cost and may even cause damage and fatal accidents. There are many effective icing prevention methods to avoid these types of problems, but there is still a lot that is unknown about the ice itself. In this work, the focus is on the flow within droplets that freeze on contact with the surface. The overall freezing process of water droplets placed on cold surfaces has been visualized experimentally using high-speed cameras (e.g., Jin et al. 2012, 2013b; Enríquez et al. 2012). Results presented include the volume expansion and freezing time under different ambient conditions. The volume expansion of the droplet results in a pointy tip and has been investigated both experimentally and numerically (e.g., Anderson et al. 1996; Enríquez et al. 2012; Snoeijer and Brunet 2012; Marín et al. 2014; Nauenberg 2014; Schetnikov et al. 2015). Anderson et al. (1996) studied a freezing droplet using a model that was able to reasonably capture the experimentally solidified droplet with its cusp-like tip and inflexion point. Snoeijer and Brunet (2012), Marín et al. (2014) and Schetnikov et al. (2015) proposed numerical models to predict the angle of the conical tip and capture the volume expansion of the droplet. Zhang et al. (2017, 2018a) experimentally studied and statistically analyzed the icing nucleation characteristics of sessile water droplets. Zhang et al. (2018b) created a numerical model considering the supercooling effect and the volume expansion effect and showed that they could reduce the deviation of the freezing time from 30 to 10% for hydrophilic and hydrophobic surfaces. By considering different cooling surfaces, interesting effects can be observed. For anti-icing purposes, a superhydrophobic surface can be used (e.g., Liu et al. 2007; Jin et al. 2013a; Oberli et al. 2014). For the interested reader, Oberli et al. (2014) presented a review on the freezing of a water droplet on these surfaces and their use in anti-icing applications. Other types of surfaces have also been considered (e.g., Huang et al. 2012; Jin et al. 2013a; Hao et al. 2014; Chaudhary and Li 2014). Huang et al.
(2012) investigated the impact of contact angle (created using hydrophobic surfaces) on a freezing water droplet and concluded that a larger contact angle gives a longer freezing time. Hao et al. (2014) studied the freezing delay time and freezing time of a stationary droplet on surfaces with various roughness and wettability. They found that the surface roughness plays an important role for the nucleation. Jin et al. (2015) used an inclined cold surface to investigate the effects of droplet size and surface temperature on the impact, freezing, and melting process of a water droplet. Chaudhary and Li (2014) considered surfaces with different wettability that were subjected to rapid cooling to study the freezing process. They revealed that the freezing time depends on the droplet temperature at the pre-recalescence time and on the surface wettability. Kawanami et al. (1997) used a numerical model considering both surface tension and the density maximum at 4 °C. The results were also validated with experiments. They found that both natural and Marangoni convection are important mechanisms for the internal flow. To the authors' knowledge, Kawanami et al. (1997) is the only published paper considering the internal flow in a freezing water droplet experimentally. This suggests that there is a need to initiate additional experimental studies on the internal flow in a freezing water droplet. Even though the internal flow during freezing of droplets seems to be a rather unexplored area, some studies have been performed on evaporation of droplets. Jin (2008) performed flow velocity measurements within water droplets using particle image velocimetry (PIV), at surfaces with temperatures ranging from 0 to 21.9 °C. The internal flow within the droplets was disclosed and the velocities of the water estimated. Other authors have also studied the internal flow numerically and experimentally (e.g., Ruiz and Black 2002; Hu and Larson 2006; Bhardwaj et al. 2009; Christy et al. 2010) with good results, and additional work has been carried out on the effect of the surrounding flow (Ljung et al. 2015). More recently, Ma et al. (2019) investigated the flow field around the droplet using PIV and could from this calculate the aerodynamic force and adhesion force over the droplet. Hu and Larson (2006) and Xu and Luo (2007) aimed to investigate the influence of the Marangoni effect on the internal flow. It was, however, revealed that the Marangoni number in their setups was 100 times lower than the theoretical one, which resulted in difficulties in capturing any influence of the Marangoni effect on the flow. This was corrected by using clean interfaces, free of surfactants. The success in visualizing the internal flow in droplets on heated surfaces motivates the study of the internal flow inside freezing water droplets. The experimental method chosen in this work is PIV, which is a versatile, nonintrusive, laser-based method used in many applications to study fluid motion; e.g., PIV has been proven to be a reliable method when studying the internal flow in evaporating droplets (Jin 2008). The aim of this work is, therefore, twofold: (1) to show that PIV can be used to visualize the movement of the water and estimate the velocities inside a freezing droplet and (2) to reveal whether Marangoni effects and/or natural convection have a noticeable influence on the flow within the droplet.
Experimental setup Droplets were gently deposited on a 50.8 mm (2″) in diameter and 5 mm thick, cold sapphire plate (aluminium oxide, Al 2 O 3 ) using a pipette. The motivation for using the sapphire plate as the cooling surface is the transparency in combination with a high thermal conductivity ( k = 46 W/mK , Incropera et al. 2013) allowing the laser sheet to pass through the droplet from beneath. The high k implies a faster cooling of the surface in comparison to other transparent materials as, for example, plate glass or pyrex glass with k = 1.4 W/mK (Incropera et al. 2013). The pipette (ThermoFisher, Finnpipette F1 1 to 10 μL, micro) was set at releasing 10 μL droplets and was kept in place by an optical rail to make the position of the pipette tip repeatable. The cooling surface was resting on an aluminium holder in direct contact with a Peltier cooler with the warm side submerged in a box with cold circulating water (closed system, connected to a tank with ice water, held at T = 3.9 ± 1.3 °C). To guide the incoming light underneath the droplet, a prism was placed in a central hole of the aluminium holder and a channel in the holder allowed the light sheet to pass through to the droplet. A PMMA (plexiglass) chamber was positioned around the sapphire plate to protect the area around the cooling surface from disturbances from surrounding airflows in the room. Four holes were made on the chamber; one on the top for the pipette to enable the release of the droplet to the surface, one on the right side to allow the laser light go directly to the prism, one on the left side for the camera to get a clear view of the setup and one on the back to mount the hygrometer to the plexiglass chamber. Before each experiment, the sapphire plate was cleaned using propanol (C 3 H 7 OH) and deionized (DI) water, and then dried with lens paper. A schematic diagram of the experimental setup can be seen in Fig. 1. The liquid used in the experiments was DI water kept at room-temperature, T water = 21.1 ± 1.4 °C and was seeded with fluorescent particles of Rhodamine B (microParticles GmbH PS-FluoRed, aqueous dispersion 25 mg/mL) with a diameter of 3.16 μm. The Stokes number (Stk) was calculated to determine if the particles were suitable for these experiments. The relaxation time of the particles varies between 0.33 and 0.59 μs for the setup, a typical diameter is 3.42 mm (case 1 in Table 1) and the velocity in the 10 cases varies typically between 0 and 1 mm/s. This results in a Stk < 5 × 10 −7 or, for the most part much less. Since Stk ≪ 1 , the conclusion is that the particles follow the flow well. The amount of particles in the water was based on the guidelines > 6 particles per interrogation area. The concentration of particles was eventually set to 0.98 ml DI water and 0.02 ml seeding particle suspension. 20 droplets were weighted on a highly sensitive scale and the droplet mass was found to be 9.98 ± 0.13 mg. The temperature of the air inside the plexiglass chamber and the temperature of the sapphire plate were monitored by two thermocouples of T and K-types, respectively. The laser was a continuous 50 mW 532 nm Nd:YAG (Altechna Co Ltd) connected to a half waveplate, a polarizing beam splitter (cube) and a beam dump. These components were used to adjust the amount of light transmitted to the droplet. A cylinder lens assembly from Dantec Dynamics created and focused the light sheet to a thickness of < 0.4 mm thick. 
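As a rough cross-check of the Stokes-number estimate given earlier in this section, the particle relaxation time and Stokes number can be recomputed from the standard Stokes-drag expressions. The sketch below (Python) is only illustrative; the polystyrene particle density and the water viscosity are assumed values, not quantities stated in the text:

    # Assumed material properties (not given explicitly in the text):
    rho_p = 1050.0      # particle density, kg/m^3 (polystyrene, approx.)
    mu_w = 1.0e-3       # dynamic viscosity of water at ~20 degC, Pa*s

    d_p = 3.16e-6       # seeding particle diameter, m (from the text)
    U = 1.0e-3          # characteristic velocity, m/s (upper end of the measured range)
    L = 3.42e-3         # characteristic length: droplet diameter of case 1, m

    # Particle relaxation time for Stokes drag, then the Stokes number
    tau_p = rho_p * d_p**2 / (18.0 * mu_w)
    Stk = tau_p * U / L

    print(f"relaxation time = {tau_p*1e6:.2f} microseconds")   # ~0.6 us, within the 0.33-0.59 us range quoted
    print(f"Stokes number   = {Stk:.2e}")                       # ~2e-7, i.e. Stk << 1, particles follow the flow

The result is consistent with the quoted conclusion that Stk is far below unity, so the seeding particles can be treated as faithful flow tracers.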
A 12 mm thick window placed on a rotation table was used to fine tune the position of the sheet to the center of the droplet (up and down to be able to adjust the light sheet to move sideways in the droplet), see Fig. 1. The benefit of illuminating the droplet from beneath is that it guides the laser light in a better way then if, for example, the light was coming from above, which would scatter the light more. Another advantage with the chosen tactic is that it allows for a good view of the symmetry line in the droplet. A CMOS camera (IDS Eye) with a spatial resolution of 1280 × 1024 pixels and pixel size 5.3 × 5.3 μm 2 together with a Navitar long distance microscope captured images of the particles. The measurements were performed at a frequency of 50 Hz and with an exposure time 5 ms during the full freezing process. The recording times were dependent on the freezing time for the droplet defined, as the time from when the droplet hits the surface until the pointy tip has appeared. The droplet was released from a distance of 3.9 mm above the surface. This height was carefully determined, since it was the shortest possible distance for the droplet to be completely released from the pipette before touching the surface. The velocity of the droplet when it hit the surface was about 77 mm/s. Frost formation Initial experiments indicated that a roughness on the cooling surface was necessary to initialize the freezing process of the droplet without delay otherwise the droplets would become supercooled instead. Since the sapphire plate is very smooth, this roughness had to be added to start the nucleation within the droplet. Frost is relatively easy to create and is more or less always present when working with freezing temperatures in nature due to the relative humidity in the air; therefore, it was used to generate the required roughness on the surface. The frost was created by letting pressurized air pass through a container filled with water and into the closed chamber surrounding the cooling surface. A regulator was used to adjust the flow rate of the incoming air and as the air passed the water container it was humidified. The temperature of the surface, as well as the air and the relative humidity inside the chamber was monitored to create a layer of frost as similar as possible during each freezing. These values were, T plate = − 8.08 ± 0.12 °C, T chamber = 16.7 ± 1.7 °C and RH = 50.4 ± 4.5% , respectively. Experimental procedure The following statements can briefly summarize the experimental procedure: 1. The cooling of the surface starts. 2. At T plate = −8.0 °C (visually determined from the computer screen), the pressurized air was switched on and turned off again when RH was around 50%. This took about 60 s. 3. The pipette was filled with the DI water and seeding particle suspension. 4. The camera and the laser were switched on. The position of the light sheet was fined tuned to the centre of the droplet (while the droplet was still hanging from the pipette). 5. The droplet was released when T plate reached −8.0 °C again (visually determined from the computer screen), which occurred approximately 30 s from when the pressurized air was switched off. 6. The cooling was turned off when the droplet was completely frozen. 7. The surface was cleaned and dried when T plate > 0.0 °C and at T plate = 4.0 °C a new experiment could begin. Data processing The images were processed in the GUI-based open-source tool, PIVlab, for DPIV analyses in MATLAB (Thielicke and Stamhuis 2014). 
Four image pre-processing techniques were used to improve the measurement quality and to facilitate the visual inspection: contrast-limited adaptive histogram equalization-to enhance contrast in the image, high-pass filtering-to sharpen the image and remove background signal, intensity capping-which reduces the increased influence of very bright particles and Adaptive Wiener-for noise reduction (Thielicke and Stamhuis 2014). A multi-pass correlation scheme with decreasing window size and window off-set was used to calculate the particle displacements, the interrogation window size was 64 × 64 pixels (first pass) decreasing to 32 × 32 pixels (second pass) and finally 16 × 16 pixels (third pass) with adaptive window shift, all with an overlap of 50%. The standard FFT algorithm was used for the cross correlation, with a three-point Gaussian peak fit to estimate the sub-pixel displacement. A standard deviation filter and a local median filter were also applied at this point. A known issue when working with PIV measurements on droplets on surfaces is that the refraction of light at the droplet surface creates a problem when measuring the flow field inside the droplet. This can be corrected by a mapping function between the points on the image plane and object plane using a method derived by Kang et al. (2004) and Minor et al. (2007). This is based on the ray tracing method and can be divided into two sub-methods, the image mapping method and the velocity mapping method. The first uses a technique to restore distorted images, and the second maps the velocity vectors from the original particle images directly onto the object plane without image restoration. Since the velocity mapping method is recommended by Kang et al. (2004) it is applied in this work. The first step in the method is to find the edge or boundary of the droplet using a least-square curve fitting. This marks the area for where the velocity mapping method is used. The images of the freezing droplets are then evaluated using the PIV-software and exported into MATLAB. Using the data from the edge detection and the data from the PIV-software, the velocity mapping method then directly map the velocity vectors obtained in the image plane to the object plane and the output is a corrected image of the velocity field in the droplet. Note that when using this method the centre region is well restored and provides accurate flow images of this region, but it does not work well in the outer region when o ∕R > 0.75 , where o is the radial distance from the centre to a point in the bottom plane of a hemispherical lens and R is the radius of the hemispherical lens. This means that there will be an area close to the edge that will be difficult to restore and retrieve a good image of because of total internal reflection (Kang et al. 2004). By preliminary studies it could be seen that most of the action takes place in the beginning of the freezing process and that most of the movement has stopped almost completely within in a few seconds. Also, after this period of time the ice scatters the light too much and the images retrieved are not suitable for analysis. Therefore, the results presented here consider the first 25% of the full freezing time. The droplets The height, radius, contact area to the cooling surface and freezing time of 10 droplets that was deposited in the setup presented above, varies according to Table 1. The mean and standard deviation of these are 1.7 ± 0.2 mm , 1.7 ± 0.3 mm , 9.0 ± 2.7 mm 2 and 24.6 s ± 11.8 s , respectively. 
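Returning to the edge-detection step of the correction method described above, the least-squares fit to the droplet boundary can be illustrated with a minimal algebraic circle fit. This is only a sketch of the first step of the procedure, assuming a circular droplet cross-section; the full ray-tracing velocity mapping of Kang et al. (2004) is not reproduced here, and the edge-point array is hypothetical:

    import numpy as np

    def fit_circle(x, y):
        # Least-squares (Kasa) fit of a circle x^2 + y^2 + D*x + E*y + F = 0
        # to detected edge points; returns centre (xc, yc) and radius R.
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        xc, yc = -D / 2.0, -E / 2.0
        R = np.sqrt(xc**2 + yc**2 - F)
        return xc, yc, R

    # Hypothetical droplet-edge points (in mm), e.g. obtained by thresholding the raw image
    theta = np.linspace(0.1, np.pi - 0.1, 50)
    x_edge = 1.7 * np.cos(theta) + 0.02 * np.random.randn(50)
    y_edge = 1.7 * np.sin(theta) + 0.02 * np.random.randn(50)

    xc, yc, R = fit_circle(x_edge, y_edge)
    print(f"fitted centre = ({xc:.3f}, {yc:.3f}) mm, radius = {R:.3f} mm")
    # Points with o/R > 0.75 (o = radial distance in the bottom plane) lie in the region
    # where the mapping is unreliable (Kang et al. 2004) and would be discarded.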
The height of the droplet is defined as the height between the cooling surface and the apex of the droplet along the symmetry line, the droplet radius is defined as the radius at the cooling surface, and the contact area to the cooling surface is calculated using the radius of the droplet. Uncertainty analysis Uncertainties during the measurements can be divided into two categories: systematic, or bias, errors, which usually arise from the measuring instruments, and random errors due to unknown or unpredictable changes in the experiments (Coleman and Steele 2009). The main sources of systematic errors originate, for example, from the pipette technique (including errors in the mixing of water with the seeding particles), the reading of the instruments during freezing and the positioning of the camera. These errors are difficult to detect, since they always force the result in the same direction and may, therefore, impact the outcome of the experiments significantly, but careful planning and execution of the experiments can minimize them. The correlation error is normally 0.1 pixel (Westerweel 2000). The random errors stem mainly from the release of the droplet or from the frost layer created on the surface, which results in droplets with different contact angles and, because of this, different freezing times. To get an estimate of the random errors, a repeatability test can be performed, see Coleman and Steele (2009). Ten experiments are considered, in which the magnitude of the corrected velocities in the y-direction along the symmetry line between the bottom and the apex of the droplet is studied. Since the droplets vary in height, the interesting points are what happens just above the freezing front, i.e., the lowest value of the corrected data set, and at the top of the corrected data in each case. In addition, the points at 25, 50 and 75% of the way above the freezing front are determined; see the inset in Fig. 2 for the locations, exemplified for case 1. The freezing front is defined as the area where the water and ice meet in the droplet. To minimize the effect of the freezing time, the times chosen for the repeatability test are scaled with the total freezing time. The times studied are t = 0.09 t_total and 0.19 t_total, i.e., 9 and 19% of the full freezing times; these times are chosen because of the different directions of flow (up and down, respectively, along the symmetry line), to be able to study the error at two different conditions during the freezing process. In Table 2 the precision errors with a 95% confidence interval for the 5 points chosen are shown, and it can be seen that the errors are below, or mostly well below, 1.5%, suggesting that the random errors are small regarding the velocities on the symmetry line. This can also be seen in Fig. 2, where the mean velocity in the five points is shown together with the precision error presented using error bars. [Fig. 2: Mean velocity in the five points used in the repeatability study at 9% and 19% of the full freezing times in the 10 cases studied. The precision error with a 95% confidence interval in each point is shown with error bars. In the inset, the locations of the points are exemplified for case 1: point 1 is located just above the freezing front, point 5 at the top of the corrected data, and points 2, 3 and 4 in between.]
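The precision (random) error quoted above follows the standard small-sample estimate of Coleman and Steele (2009); below is a minimal sketch in Python, with hypothetical velocity samples standing in for the ten measured cases:

    import numpy as np
    from scipy import stats

    def precision_error(samples, confidence=0.95):
        # Half-width of the confidence interval of the mean for n repeated measurements:
        # P = t_{alpha/2, n-1} * s / sqrt(n)
        samples = np.asarray(samples, dtype=float)
        n = samples.size
        s = samples.std(ddof=1)                       # sample standard deviation
        t = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
        return t * s / np.sqrt(n)

    # Hypothetical corrected y-velocities (mm/s) at one of the five points,
    # one value per droplet, at t = 0.09 t_total
    v_point = [0.520, 0.525, 0.518, 0.523, 0.521, 0.524, 0.519, 0.522, 0.520, 0.523]

    P = precision_error(v_point)
    mean = np.mean(v_point)
    print(f"mean = {mean:.3f} mm/s, precision error = +/- {P:.4f} mm/s "
          f"({100 * P / mean:.2f} % of the mean)")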
This means that the velocities in the droplet are in fact comparable in each case despite their differences in appearance, and that one of the droplets can be selected for further study. Results and discussion In this part results regarding the freezing front and the flow field within one droplet is presented and the frost formation is discussed. As exemplified in Fig. 2, the overall flow field in the 10 droplets is comparable, and therefore, one droplet (case 1) can be chosen for further study. The freezing front The height of the freezing front, defined as the area, where the water and ice meet in the droplet, is studied in the performed experiments in three cases (case 1, 5 and 10) and compared to experiments by Jin et al. (2013b) and simulations by Karlsson et al. (2018). Note that the setups are somewhat different and that the volumes of the droplets are not identical. The volumes of the droplets in the experiments performed by Jin et al. and the simulations by Karlsson et al. are 9.32 μl as compared to 10 μl in this work (for exact details about the droplet sizes in this work, see Table 1). Both the times and the heights are scaled with the total freezing time and the total height of the droplet for each case. In Fig. 3 it can be seen that the freezing front is approximately linear for the first 25% of the freezing process investigated (data up to 35 and 38% of the freezing processes is used from the works by Jin et al. 2013b;Karlsson et al. 2018, respectively) and that the similarities between the three cases are strikingly close despite the differences between them. This suggests that the freezing is caused by pure heat conduction from the cooling surface. Internal flow patterns in the freezing water droplet At t = 0 s , the droplet hits the surface and during the first second the water in the droplet moves in a chaotic way, influenced by the release from the pipette and the collision with the surface. During this time period, no data is extracted from the images. At t = 1 s ( t = 0.06 t total ), it is clear that the freezing has already started and vortices emerge on both sides of the symmetry line of the droplet moving up along symmetry line and down along the droplet-air surface to the cooling surface, see Fig. 4a. When the droplet has frozen to about half its volume (at approximately t = 0.25 t total ) these vortices start to decay in strength and the flow is slowed down. As the freezing continues, no vortices can be seen and there is almost no flow in the remaining water. In Fig. 5, the magnitude of the velocity in the y-direction along the symmetry line at t = 1 to 4 s ( t = 0.06 t total to 0.25 t total ) can be seen. The highest velocities can be found in the beginning of the freezing process, but at t = 2.5 s ( t = 0.16 t total ) the flow starts to shift, see Fig. 4d, where the flow becomes chaotic and this is also reflected in the velocity, since it approaches zero at this point. After this time, the flow has changed direction and an increase in velocity can be seen again at t = 3 s ( t = 0.19 t total ) before it slows down again at t = 4 s and the velocities approaches zero. In Fig. 6 the magnitude of the velocity in the y-direction along four lines in the droplet are displayed, corresponding to the times in Fig. 4, and here it is possible to see how the velocity varies throughout the droplet. Note that the lines are at different locations at each time and that the velocity scales varies. 
When t = 1 s, it can be seen that the highest velocities are found closer to the surface of the droplet, but just before the shift in direction of the flow the velocities are evened out in the droplet. After the shift (at t = 2.5 s), the highest velocities can again be found closer to the surface, but shortly after, the velocities are once more evened out in the droplet. According to the theory of Marangoni convection, the vortices are driven by the differences in surface tension arising from the large temperature differences at the droplet surface. According to Kawanami et al. (1997), the Marangoni convection has a larger effect on the flow than the natural convection caused by density differences. The initial temperature difference in the freezing water droplets is about 30 °C between the water and the cooling surface. The surface tension of water in contact with air increases with decreasing temperature, which means that the water will be pulled to regions with higher surface tension (hence lower temperature). For the case of a freezing droplet, this means that the water located higher up in the droplet will be pulled down along the interface towards the cooling surface and then move up again close to the center of the droplet. [Fig. 3: Height of the freezing front at different times, scaled with the corresponding droplet heights and freezing times, in three works: the present experiments (cases 1, 5 and 10), experimental data by Jin et al. (2013b) and simulations by Karlsson et al. (2018).] This behavior is observed up to t = 0.13 t_total in all cases, or up to the time t = 2 s presented in Fig. 4. Calculations of the Marangoni number for these experiments suggest a value well above 10,000 in the beginning of the freezing, confirming a large influence of Marangoni convection under ideal conditions (Pearson 1958). Marangoni convection in water droplets is, however, known to be highly sensitive to impurities and contamination (Hu and Larson 2006; Xu and Luo 2007). A possible reason for the interesting phenomenon that appears after t = 2.5 s in Fig. 4, where the flow moves in the direction from the cooling surface along the droplet surface to the top of the droplet (down along the symmetry line), might be an increased effect of natural convection in the density inversion region of water together with a decreased temperature difference at the surface. Even though the temperature distribution in the droplet is not investigated here, it can be assumed that the temperature of the water in the whole droplet at this time (at t = 4 s) has dropped to a temperature close to zero. This assumption is based on the fact that, since the mixing of the water is great during these first seconds and the cooling surface is much larger in comparison with the droplet, the water should have had time to cool off during this time period. [Fig. 6: Magnitude of the velocity along four lines at t = 1, 1.5, 2, 2.5, 3 and 4 s, i.e. from 6% up to 25% of the freezing time of the droplet; panel (f): t = 4 s (t = 0.25 t_total).] Moreover, the flow in this time period shows large similarities with the natural-convection-dominated flow modeled by Karlsson et al. (2018), both regarding the direction of the vortices and the shape of the freezing front.
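The Marangoni-number estimate quoted above ("well above 10,000") can be reproduced in order of magnitude from textbook property values; the short sketch below uses assumed values for the temperature coefficient of surface tension, the viscosity and the thermal diffusivity of water, none of which are given in the text:

    # Order-of-magnitude estimate: Ma = |dsigma/dT| * dT * L / (mu * alpha)
    # Property values are assumed (typical for water near 0-20 degC), not taken from the text.
    dsigma_dT = 1.5e-4   # |dsigma/dT| for a water-air interface, N/(m K)
    dT = 30.0            # initial temperature difference, K (from the text)
    L = 1.7e-3           # characteristic length: mean droplet height, m (Table 1)
    mu = 1.3e-3          # dynamic viscosity of cold water, Pa*s
    alpha = 1.4e-7       # thermal diffusivity of water, m^2/s

    Ma = dsigma_dT * dT * L / (mu * alpha)
    print(f"Ma ~ {Ma:.0f}")   # ~4e4, consistent with the "well above 10,000" stated above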
Two differences between the simulations in Karlsson et al. (2018) and the presented experiments are the direction of the flow in the first seconds of freezing, as well as the fact that the highest magnitude is retrieved in the absolute beginning of the freezing, and not a few seconds into the freezing process as observed in the simulations. This reasoning suggests that natural convection is not the only mechanism inducing internal flow in the freezing droplet. After this point the flow is much slower due to the equalized temperatures in and around the droplet; in addition, the viscosity of the water increases (an increase of about 80% from T = 21 °C to T = 0 °C), which also slows the movement of the particles. As already mentioned, the analysis of the data ends at t = 0.25 t_total due to the relatively large formation of ice, which at this point spreads the laser sheet significantly. To conclude this section, note that the internal flow behavior in case 1 has been observed in all of the 10 cases studied in this paper when the time is scaled with the full freezing time. Comments on the frost formation Even though the frost formation is not investigated in detail here, some comments about its impact on the experiments should be made. How much frost is actually created during the experiments is not measured. The time and relative humidity are monitored so that similar conditions are created in each experiment. If too much frost is added to the surface, the light from the laser will be spread out, resulting in poor quality of the images retrieved as the droplet freezes. If too little frost is added to the surface, the droplet will not freeze and becomes supercooled instead. Also, based on visual observations, more frost on the surface results in droplets with larger contact angles and longer freezing times. Since the surface is very smooth, the frost initiates the ice nucleation that starts the freezing of the droplets. The conclusion is, therefore, that the frost is required to start the nucleation in the droplets but, based on the droplets seen in this work, at the expense of repeatability in the appearance of the droplets (i.e., droplet height and droplet radius) and, because of this, also in the freezing time of the droplet. More research on the impact of the frost on the freezing of droplets has to be made, but since the interest in this paper is in the inner flow, the current knowledge of the frost is considered sufficient.
When using the correction method the inner flow close to the centre of the droplet is well restored and the vortices are easy to track, but the flow close to the droplet surface is not retrieved. Initially the water is moving in the direction from the top of the droplet along the interface towards the cooling surface and then it is moving up again close to the center of the droplet, symmetrical on both sides of the centerline. This indicates that the flow is caused by the Marangoni effect due to the large temperature differences in and around the droplet. After a few seconds the flow has shifted in the opposite direction due to equalized temperatures at the surface and due to an increased effect of natural convection in the density inversion region of water. To the authors best knowledge, this is the first time that this change in flow direction has been visualized in freezing water droplets. To freeze the water droplets a frost layer was created using humidified air. The time and relative humidity was monitored so that similar conditions were created in each experiment. This was sufficient for this work, but the creation of frost is interesting and should be investigated further.
7,809.2
2019-11-13T00:00:00.000
[ "Physics", "Environmental Science" ]
Comparatively light extra Higgs states as signature of SUSY SO(10) GUTs with 3rd family Yukawa unification We study 3rd family Yukawa unification in the context of supersymmetric (SUSY) SO(10) GUTs and SO(10)-motivated boundary conditions for the SUSY-breaking soft terms. We consider μ < 0, such that the SUSY loop-threshold effects enable a good fit to all third family masses of the charged Standard Model (SM) fermions. We find that, fitting the third family masses together with the mass of the SM-like Higgs particle, the scenario predicts the masses of the superpartner particles and of the extra Higgs states of the MSSM: while the sparticles are predicted to be comparatively heavy (above the present LHC bound but within reach of future colliders), the spectrum has the characteristic feature that the lightest new particles are the extra MSSM Higgses. We show that this effect is rather robust with respect to many deformations of the GUT boundary conditions, but turns out to be sensitive to the exactness of top-bottom Yukawa unification. Nevertheless, with moderate deviations of a few percent from exact top-bottom Yukawa unification (stemming e.g. from GUT-threshold corrections or higher-dimensional operators), the scenario still predicts extra MSSM Higgs particles with masses not much above 1.5 TeV, which could be tested e.g. by future LHC searches for ditau decays H0/A0 → ττ. Finding the extra MSSM Higgses before the other new MSSM particles could thus be a smoking gun for a Yukawa-unified SO(10) GUT. Introduction Grand Unified Theories (GUTs) [1][2][3] present an attractive setup for Physics Beyond the Standard Model (BSM). While gauge coupling unification in GUTs is necessary for consistency, the unification of Yukawa couplings is optional, depending on the GUT operators generating the Yukawa interactions. Conversely, barring a numerical accident, Yukawa unification at high energies might indicate a bigger gauge symmetry. The most convenient setup for Yukawa unification is that of supersymmetric (SUSY) GUT models; while supersymmetry helps with gauge coupling unification by modifying the renormalization group (RG) slopes, it can also help with Yukawa unification indirectly via loop-threshold corrections at the SUSY scale M_SUSY [4][5][6][7]. The simplest example of some Yukawa couplings unifying would be b-τ unification in the 3rd family within the context of SU(5) GUTs [8]. An even more restrictive and predictive setup is that of t-b-τ(-ν) unification, which is most straightforwardly achieved in SO(10), where all SM fermions of one family, with the addition of a right-handed neutrino, constitute a single irreducible representation 16 of SO(10). In such a setup, the 3rd family neutrino coupling also has the same value as the top, bottom and tau Yukawa couplings, coming from the operator 16_3 · 16_3 · 10, where 16_3 contains the entire Standard Model (SM) 3rd family and the Minimal Supersymmetric SM (MSSM) Higgs doublets are contained in the representation 10. Henceforth, we shall refer to this scenario simply as t-b-τ unification and omit the ν, despite its coupling also unifying. In this work we study t-b-τ unification and assume its origin to be in a SUSY SO(10) GUT. Below the GUT scale, we take the effective theory to be a softly broken MSSM. In such a framework, GUT symmetry would impose relations between the soft breaking terms of the MSSM at the GUT scale.
The attractive phenomenological feature of such a setup is that Yukawa unification with GUT-like boundary conditions for the soft terms potentially results in a predictive sparticle spectrum. In the most direct "vanilla" approach, SO(10) symmetry would result in all the sfermion mass parameters unifying into a single value m_16, the soft Higgs masses unifying into m_10, universal gaugino masses M_1/2, and a universal factor a_0 for the proportionality between the Yukawa and A-matrices. The only other SUSY parameters in the theory would then be the ratio of the MSSM Higgs vacuum expectation values (VEVs), tan β, and the sign of the coupling µ of the term H_u · H_d in the superpotential. It is known that for t-b-τ unification tan β has to be large (∼ 50) due to the top-bottom mass hierarchy m_t ≫ m_b. Recall that with no SUSY threshold corrections, the Yukawa coupling ratio y_τ/y_b tends to run via renormalization group equations (RGEs) to a GUT value of 1.3 (see e.g. [9]), and µ < 0 gives the correct sign in the threshold correction of y_b to help lower this ratio to 1, see e.g. [10, 11]. For this reason we consider µ < 0 to be the better motivated setup for t-b-τ unification. Interestingly enough, fits to low energy data within this specific setup have, at least to our knowledge, not really been attempted, mostly because the region was disfavored by RGE estimates showing no electroweak symmetry breaking (EWSB), to be discussed later. In this paper, we investigate this "vanilla" region and find it viable from the point of view of EWSB. Furthermore, we obtain good fits to the low energy Yukawa data and the SM Higgs mass, resulting in a predictive sparticle spectrum. The most striking feature of the entire setup is the prediction of a typically ∼ TeV mass for the additional neutral and charged Higgses in the MSSM, a prediction which is now being tested by the LHC. The extra Higgs prediction is very sensitive especially to top-bottom unification, and is very hard to observe with a bottom-up approach, especially if the mass m_A0 is assumed a priori as in some studies, e.g. [12]. To be more specific about what our setup achieves, and to put our results in context, it is necessary to survey the existing extensive literature on the topic of t-b-τ unification. Many early studies [13][14][15][16][17][18][19][20][21] predate the Higgs mass measurement in 2012, or even the top quark mass measurement in 1995. Besides considering the viability of Yukawa unification, they also had to contend with predicting the top quark or Higgs mass, e.g. [6, 21, 22], or were considering naturalness-based criteria [23]. In the literature, a number of important issues have been identified: 1. The µ term: µ > 0 or µ < 0? The Higgs-connecting coupling µ from the superpotential is present in the potential V of the Lagrangian only via |µ|. Assuming no additional CP violation, µ ∈ R, so the choice of the sign of µ is free. The main preference for µ > 0 in the literature stems from considerations of the anomalous magnetic moment of the muon, g_µ − 2, see e.g. the motivation in [31]. This was measured to be above what the Standard Model predicts (see e.g. PDG [48]), and µ > 0 would provide a SUSY contribution in the positive direction, potentially explaining the discrepancy. Despite this, there are indications that a fit of g_µ − 2 for µ > 0 with universal gaugino masses is difficult to achieve in SO(10) [30].
The study of µ > 0 scenarios, typically within a parametrization as close as possible to the constrained MSSM (CMSSM, a.k.a. mSUGRA) with universal gaugino masses, furthermore showed that there is a preferred "funnel" region for the soft MSSM parameters [28], and that the universal gaugino mass parameter should be quite small: M_1/2 ≲ 500 GeV [27]. Consequently, these scenarios prefer a light gluino, m_g ≲ 450 GeV [34], and suggest an upper bound on the attainable gluino mass of around m_g < 2 TeV [30], a constraint coming from fitting the SM Higgs mass. Due to the non-observation of such low gluino mass scenarios at the LHC, the possibility of increasing its mass was investigated in subsequent works: it was found in [26] that the gluino mass can be raised to 2-3 TeV by relaxing the Yukawa unification to hold only approximately, at a few % level, or by introducing a split in the squark mass parameters [33]. Note that all these results are specific to the preferred soft parameter region for µ > 0. From the point of view of a fit to the data, however, it was realized a long time ago that µ < 0 is preferred, see e.g. [38, 45], since it gives the correct sign to the threshold corrections to the y_b Yukawa coupling. Since the sign of the contribution to g_µ − 2 depends on Sign(µ M_2), see e.g. [49], this prompted a consideration of non-universal gaugino masses, see [37][38][39][41][42][43], with M_2 < 0. Such boundary conditions can most conveniently be achieved by considering Yukawa unification within the context of the Pati-Salam symmetry instead of fully unified SO(10), see [13, 35-38, 50] for various Pati-Salam setups and studies of Yukawa unification. Another possible approach to g_µ − 2 with µ < 0 is to only demand that the g_µ − 2 prediction is no worse than in the Standard Model, see [40]. This last case still considered non-universal gaugino masses due to EWSB considerations, see the next point. 2. EWSB and the split between m²_Hd and m²_Hu at M_GUT. Another issue in Yukawa unification models important for their consistency turns out to be electroweak symmetry breaking. In a softly broken MSSM, a necessary condition for EWSB is to obtain m²_Hu < 0 at the SUSY scale. This is typically automatically achieved by RGE running from M_GUT, where this parameter value is positive; the scenario where RG running triggers EWSB is referred to as radiative EWSB (REWSB). Another necessary non-tachyonicity condition, however, also requires m²_Hd > m²_Hu at the SUSY scale. Assuming the equality m²_Hd = m²_Hu at the GUT scale, m²_Hd is driven down faster than m²_Hu, essentially because the former has positive contributions to its beta function from both y_b and y_τ, while the latter has only contributions from y_t (and potentially from y_ν), cf. [47]. For this reason, most models in the literature introduce a split m²_Hd > m²_Hu already at the GUT scale [27-33, 37-45, 51]. The simplest way to achieve this is by imposing the split ad hoc, which is called "just so" Higgs splitting and assumes m²_{Hd,u} = m²_0 ± ∆ at the GUT scale, e.g. [27, 29], with the relative split amounting to ∼ 13%. An alternative mechanism to generate this split is D-term splitting [17, 28, 41-45], which also splits up the other soft scalar masses in a particular way due to D-term contributions to the masses.
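As a rough illustration of this competition between the up-type and down-type Yukawa couplings, the sketch below (Python) encodes only the third-family Yukawa and A-term contributions to the 1-loop running of the difference m²_Hd − m²_Hu; the gaugino contributions cancel in the difference, and the hypercharge S-term and first/second-family Yukawas are neglected, broadly in the spirit of the simplified RGEs of appendix B. The coefficients are quoted from the standard MSSM 1-loop results as we recall them, and the numerical inputs are placeholders rather than fit values from the paper:

    def beta_delta_1loop(yt, yb, ytau, ynu, m2, a2):
        # 16*pi^2 times the 1-loop beta function of delta = m2_Hd - m2_Hu,
        # keeping only the third-family Yukawa and A-term contributions
        # (gaugino terms cancel in the difference; the hypercharge S-term is dropped).
        Xt   = m2['Hu'] + m2['Q3'] + m2['U3'] + a2['t']
        Xb   = m2['Hd'] + m2['Q3'] + m2['D3'] + a2['b']
        Xtau = m2['Hd'] + m2['L3'] + m2['E3'] + a2['tau']
        Xnu  = m2['Hu'] + m2['L3'] + m2['N3'] + a2['nu']
        return 6*yb**2*Xb + 2*ytau**2*Xtau - 6*yt**2*Xt - 2*ynu**2*Xnu

    # Placeholder GUT-scale inputs (TeV^2): universal scalars, m2_Hd = m2_Hu, vanishing A-terms
    m2 = {k: 25.0 for k in ['Hu', 'Hd', 'Q3', 'U3', 'D3', 'L3', 'E3', 'N3']}
    a2 = {k: 0.0 for k in ['t', 'b', 'tau', 'nu']}

    # Sign convention: the result is 16*pi^2 * d(delta)/d(ln mu), so a negative
    # value means delta grows when running towards the infrared.
    print(beta_delta_1loop(yt=0.7, yb=0.7, ytau=0.7, ynu=0.7, m2=m2, a2=a2))    # exact unification: ~0
    print(beta_delta_1loop(yt=0.85, yb=0.7, ytau=0.55, ynu=0.0, m2=m2, a2=a2))  # illustrative lower-scale values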
Attempts to avoid m²_Hd slipping below the value of m²_Hu have also been studied in the context of adding right-handed neutrinos or introducing a first/third family scalar mass split in the GUT boundary conditions, see [51], both options essentially modifying the RGE beta functions for m²_Hd and m²_Hu. The well-known issue regarding REWSB with m²_Hd = m²_Hu at the GUT scale has been studied in [6, 17, 52, 53], and reiterated later in e.g. [31] based on an approximate expression for m²_Hd − m²_Hu at low scales taken from [54]. It should be noted, however, that these papers use semi-analytic formulas for the RGE running from the GUT scale to the SUSY scale, which hold only approximately. In the context of the GUT boundary condition m²_Hd = m²_Hu, successful REWSB was achieved for the case of non-universal gaugino masses [47, 55], while the old arguments for the universal gaugino mass case were reiterated. On the other hand, successful REWSB was found for the case of the CMSSM with µ < 0 in [46], albeit with only approximate Yukawa unification due to their bottom-up approach of running the Yukawa parameters. In contrast to most considerations in the past works presented above, we find that exact Yukawa unification with universal gaugino mass terms and m²_Hu = m²_Hd is in fact possible. We show this explicitly by performing the RGE running numerically; although we use 2-loop RGEs for the MSSM + soft terms for (most) results, the 1-loop RGE solutions already confirm this qualitative picture. While we agree with prior analyses that RG running just below the GUT scale causes m²_Hu > m²_Hd in the running parameters, this relation reverses by RG running a few orders of magnitude above the SUSY scale, thus achieving successful REWSB. This holds true at least in a large part of the soft parameter space. Crucially, however, the running value of m²_Hd − m²_Hu is typically below (1 TeV)² at the SUSY scale, causing the extra MSSM Higgs bosons to be the lightest part of the sparticle spectrum. 3. Experimental constraints and considerations. The most obvious type of prediction studied in Yukawa unification models is the MSSM spectroscopy, see [24, 36, 44, 56, 57] for studies which focus on this. Studies which fit GUT models to the experimental data usually consider some or all of these constraints. It was found in many specific realizations of Yukawa unification, however, that potential experimental tensions can usually be relieved by relaxing the demand for exact Yukawa unification and imposing it only at the level of a few %. This essentially works by relaxing the constraints on the superpartner masses. Such scenarios have been dubbed "quasi-unification", see e.g. [25, 35-37, 50, 55, 61, 62]. Alternative setups to improve the fits have also been tried, such as splitting the A-terms [63], considering 4 Higgs doublets instead of 2 [64], introducing certain extra vector-like fermions motivated by an E_6 GUT context [65], or introducing an entire vector-like family of SM fermions [66]. In this paper, as motivated earlier, we consider µ < 0 and numerically find a good solution for REWSB despite the relation m²_Hd = m²_Hu and universal gaugino masses. In the literature, as far as we are aware, the only case directly comparable with ours is [46], with the limitation that the SM Higgs mass had not yet been measured at the time. One of the scenarios they consider successful (including EWSB) is the CMSSM (implying universal gaugino masses and no GUT split between m²_Hd and m²_Hu) with µ < 0.
They use, however, a bottom-up approach for Yukawa RGE, and therefore consider only the quasi-unification scenario with a parameters scan. They consequently do not find the low MSSM Higgs mass effect, since it is very sensitive to exact unification, as we show in this paper. Given the effect of the low extra Higgses we study in this paper, the most acute experimental constraints would come from two possible sources. The first is the B s → µ + µ − decay, with the extra Higgs contribution estimated as, see e.g. [67], compared to the PDG measured value of (3.2 ± 0.7) · 10 −9 [48]. The second constraint is the increasingly competitive LHC searches for ditau decays H 0 /A 0 → τ + τ − of the neutral MSSM Higgses, see [68,69], with current bounds implying m A 1.5 TeV (for tan β = 50). Given this most recent estimate and future trends of bounds, we find the ditau search to be comparable or more stringent than the B s → µ + µ − process; we thus focus only on the ditau decay in this paper for simplicity. The other parts of the SUSY spectrum in our setup are heavy, larger than 4 TeV for gluinos and squarks, far above the present ATLAS and CMS bounds but within reach of future colliders such as the FCC-hh or SppC. The organization of the paper is as follows: in section 2 we introduce our notation and conventions, and analyze the salient points regarding EWSB and the masses of the extra Higgs bosons in the MSSM. In section 3, we perform an RGE analysis of the quantity m 2 H d − m 2 Hu relevant for both those aspects and perform a sensitivity analysis to deformations of various parameter relations around an example point. In section 4, we perform a more general investigation of the CMSSM parameter space and show that the masses of the extra Higgses are predicted to be low in general. Finally, in section 5, we analyze how JHEP06(2020)014 constraints from the LHC challenge exact Yukawa unification and how a quasi-unification scenario helps in this regard. Then we conclude. For completeness, we also include two appendices. In appendix A the general 1-loop RGEs for a softly broken MSSM with righthanded neutrinos are presented. In appendix B a simplified version of the RGEs neglecting the Yukawa couplings of the first 2 families is given. The indices i and j are family indices, the SU(2) contractions between doublets are denoted by a dot and defined by Φ · Ψ ≡ ab Φ a Ψ b with 12 = − 21 = 1, while the SU(3) indices are suppressed. Also note that a left-chiral superfield Φ c contains the charge conjugated fermion field ψ † , as well as the conjugated complex scalar fieldφ * R . The soft-breaking terms consist of gaugino mass terms, the scalar trilinear A-terms, the scalar soft-mass terms, and the b-term: JHEP06(2020)014 We labeled the SU(3) C , SU(2) L and U(1) Y gauginos by λ a 3 , λ b 2 and λ 1 , respectively. The tildes above the fields indicate the scalar component of the superfield, with the exception of H u and H d , which also indicate scalar parts. The neutral components of H u and H d each acquire an EW breaking VEV: which -motivated by EW symmetry breaking in the SM -are parametrized by This leaves tan β as the only free parameter, and v u , v d ∈ R. 
Minimization of the potential with respect to the electrically neutral components H^0_u and H^0_d of the SU(2) doublets leads to a (tree-level) vacuum solution, which can be written as |µ|² = (m^2_Hd − m^2_Hu tan²β)/(tan²β − 1) − m^2_Z/2, together with an analogous condition fixing the soft parameter b. Note that we have solved the vacuum equations for the superpotential parameter |µ|² and the soft parameter b, while treating the unknown VEVs v_u and v_d as independent variables, appearing implicitly via v_u/v_d = tan β. In the large tan β regime, we can make the approximation |µ|² ≈ −m^2_Hu − m^2_Z/2, implying that a solution to EWSB (at tree level) is possible only if the soft mass parameter is negative at the energy scale of computation, i.e. m^2_Hu < 0 at the SUSY scale. After EW symmetry breaking, 3 real scalar degrees of freedom in H_u and H_d become part of the longitudinal components of the massive gauge bosons W± and Z^0 via the Higgs mechanism, leaving 5 real degrees of freedom as physical states. We label them in the standard way by h^0, H^0, A^0, H^+ and H^−, where the superscripts denote their EM charge. The light Higgs at 125 GeV is denoted by h^0, while H^0 and A^0 denote the heavier neutral scalars with even and odd parity P, respectively. We get the well-known expressions for their tree-level masses in eq. (2.14). Considering the regime m^2_A0 ≫ m^2_Z, m^2_W leads at leading order to m^2_H0 ≈ m^2_H± ≈ m^2_A0, showing that all the extra Higgs particles H^0, A^0 and H^± lie near the scale m_A0. The scale of m^2_A0 in turn depends on the vacuum solution for |µ|²; combining eq. (2.11) and (2.8) gives the tree-level value m^2_A0 = m^2_Hu + m^2_Hd + 2|µ|². We see that, crucially, the scale m^2_A0 depends on the difference m^2_Hd − m^2_Hu of the mass-squared soft parameters. In the large tan β regime, this approximates to m^2_A0 ≈ m^2_Hd − m^2_Hu − m^2_Z, so that a non-tachyonic tree-level mass for A^0 requires m^2_Hd > m^2_Hu as a necessary condition.

We now briefly turn to a discussion of the scale of the masses at 1-loop level. The vacuum solutions at 1-loop take the same form as at tree level, but in terms of hatted quantities (see [71,72]). The hatted quantities, including m̂^2_W for later convenience, are defined by shifting the tree-level parameters by the corresponding 1-loop corrections, where t_u and t_d are the 1-loop tadpole expressions, and Π^T_ZZ and Π^T_WW are the transverse Z- and W-boson 1-loop self-energies. The hatted masses m̂^2_Z and m̂^2_W are the 1-loop masses computed in the DR renormalization scheme. Their explicit expressions can be found in [71] and will not be reproduced here. For a consistent loop calculation, the quantities entering the 1-loop corrections can be taken to be the tree-level parameters. When the quantities in the superpotential of eq. (2.3) are complex, the neutral states h^0, H^0 and A^0 mix: with the 1-loop correction included, the mass eigenstates may no longer be CP eigenstates. We shall not be considering complex phases in the SUSY parameters, so this complication need not be considered. Because CP symmetry can be broken at next-to-leading order in the general case, rather than the mass m^2_A0,tree from (2.16), a more convenient quantity to consider is the mass of the charged Higgses H^±, since the charged Higgses have no other states to mix with. The expression for the mass of H^+ at 1-loop order is given in eq. (2.21), with Π_H+H− denoting the self-energy of H^±, see [71].

RGE analysis of m^2_Hd − m^2_Hu

As a first step in assessing models with Yukawa unification and SO(10) boundary conditions for the soft parameters, we study the RG running of the quantity m^2_Hd − m^2_Hu. This quantity must be positive at the SUSY scale, a feature crucial for EWSB, and its magnitude sets the mass scale of the extra MSSM Higgs states H^0, A^0 and H^±, as was discussed in section 2.
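Before turning to the RG analysis, the tree-level relations of section 2 can be made concrete in a few lines of code. The sketch below evaluates the standard MSSM tree-level EWSB solution for |µ|² and the resulting Higgs spectrum for given soft parameters at the SUSY scale; it is an illustration of the formulas discussed above, not code from the paper, and the numerical inputs are placeholder values chosen only to mimic multi-TeV soft masses with a small m^2_Hd − m^2_Hu difference.

```python
import numpy as np

MZ, MW = 91.19, 80.38  # GeV, tree-level gauge boson masses

def tree_level_higgs_spectrum(m2_Hd, m2_Hu, tan_beta):
    """Tree-level EWSB solution and Higgs masses (standard MSSM formulas).

    m2_Hd, m2_Hu : soft mass-squared parameters at the SUSY scale [GeV^2]
    tan_beta     : ratio of the two Higgs VEVs, v_u / v_d
    """
    t2 = tan_beta**2
    # |mu|^2 from the tree-level minimization condition
    mu2 = (m2_Hd - m2_Hu * t2) / (t2 - 1.0) - 0.5 * MZ**2
    if mu2 < 0.0:
        raise ValueError("no EWSB solution: |mu|^2 < 0 for these inputs")
    # CP-odd Higgs mass sets the scale of the whole extra-Higgs sector
    m2_A0 = m2_Hu + m2_Hd + 2.0 * mu2
    if m2_A0 < 0.0:
        raise ValueError("tachyonic A0: m2_Hd - m2_Hu is too small")
    m2_Hpm = m2_A0 + MW**2
    # neutral CP-even masses
    c2b = np.cos(2.0 * np.arctan(tan_beta))
    disc = np.sqrt((m2_A0 + MZ**2)**2 - 4.0 * m2_A0 * MZ**2 * c2b**2)
    m2_h0 = 0.5 * (m2_A0 + MZ**2 - disc)
    m2_H0 = 0.5 * (m2_A0 + MZ**2 + disc)
    masses2 = {"h0": m2_h0, "H0": m2_H0, "A0": m2_A0, "Hpm": m2_Hpm}
    return {k: float(np.sqrt(v)) for k, v in masses2.items()}

# Illustrative only: multi-TeV soft masses with a small m2_Hd - m2_Hu difference
m2_soft = 6000.0**2
print(tree_level_higgs_spectrum(m2_Hd=-m2_soft + 800.0**2,
                                m2_Hu=-m2_soft, tan_beta=50.0))
```

With these illustrative inputs the spectrum indeed comes out with m_A0 ≈ m_H0 ≈ m_H± far below the soft-mass scale, as anticipated by the large tan β approximation m^2_A0 ≈ m^2_Hd − m^2_Hu − m^2_Z, which makes explicit why the running value of m^2_Hd − m^2_Hu at the SUSY scale is the quantity to track in what follows.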
An often cited requirement in the literature for REWSB to occur is a split in the GUT scale boundary conditions for m 2 H d and m 2 Hu , see section 1 and references therein. We show here, however, that such a split is not necessary, since we obtain m 2 H d − m 2 Hu > 0 at the SUSY scale regardless. The value of this difference, however, is small compared to the magnitude of each term, implying low lying extra Higgs states in the MSSM, an effect that we show to be especially sensitive to t-b unification. To facilitate the RGE analysis, we make use of simplified RGEs at 1-loop and CMSSM boundary conditions, as explained in separate subsections below. Note that these simplifications are specific to this section of the paper and do not change the general conclusions, confirmed by comprehensive analyses in later sections by use of 2-loop RGEs and SO(10) motivated boundary conditions. The analysis of the simplified case nevertheless gives valuable insights into EWSB and the low spectrum of the extra MSSM Higgses, confirming that this striking feature can be understood as an RGE effect, and is seen already at 1-loop order. The simplified boundary conditions -CMSSM In this section we make a slight simplification and consider the CMSSM boundary conditions (see e.g. [73]) as the default scenario, instead of the SO(10) motivated split in the sfermion and Higgs soft masses to be studied later. We also study how RG running changes under various deformations of the default CMSSM boundary conditions, obtaining a number of important conclusions applicable to the more general scenario beyond CMSSM. More explicitly, we assume the following for the RGE analysis in this section: • The boundary conditions are set at a high energy: M GUT = 2 · 10 16 GeV. • The MSSM is extended by right-handed neutrinos at a scale M R , with M R ≤ M GUT , below which they are integrated out. • The boundary conditions of the soft parameters are those of CMSSM: The RGE boundary conditions for the soft parameters are thus parametrized by the 3 CMSSM parameters m 2 0 , M 1/2 and a 0 . • Unification of 3rd family Yukawa couplings at the scale M GUT : The above assumptions are a simplified version of the "SO(10) boundary conditions" with only one soft scalar mass parameter m 0 and with universal sfermion soft matrices (typical leading order pattern in "flavored GUTs" with family symmetry): the constraints are implied in the unification of all fermion sectors, and t-b-τ unification arises in the simple case when the Yukawa contribution to the 3rd family of 16 F comes from the 16 F 3 ·16 F 3 ·10 H operator in SO (10). We note that although the stated class of SO(10) models gives rise to the MSSM setup described below M GUT , we do not necessarily commit to a particular SO(10) UV completion. In this context, we would also like to remark that the exact Yukawa unification will be subject to model-dependent corrections such as e.g. GUT threshold corrections, which however depend on the details of the UV completion. We will study the effects of such perturbations of the scenario later in the paper. The simplified 1-loop RGE The complete set of RGEs for the neutrino-extended and softly-broken MSSM are given in appendix A (also cf. [71]). The full RGEs can be simplified by eliminating some degrees of freedom which are either numerically irrelevant or unnecessary for our considerations. 
In the quark sector, for example, there is little mixing, and the Yukawa matrices in both quark JHEP06(2020)014 sectors as well as the charged lepton sector have hierarchical masses. A good approximation is therefore to consider only the 3rd family of fermions. Also, we assume family universality in all sfermion mass matrices at the GUT scale. To simplify the RGE, we consider the minimal amount of variables consistent with the above assumptions. It turns out that the following 28 variables in the RGEs are required: • The 3 gauge couplings g 1 , g 2 and g 3 . • 4 Yukawa couplings of the 3rd family y t , y b , y τ , y ν . • The 6 × 2 + 2 soft mass parameters: m 2 x i , where x ∈ {Q, L, u, d, e, ν} and i ∈ {1, 3} are independent, and the Higgs mass parameters m 2 H d and m 2 Hu . The case i = 2 does not have to be studied separately since, in our setup, the i = 2 quantities have exactly the same running and boundary conditions as those for i = 1. The resulting simplified 1-loop RGE are presented in appendix B, which contains also more details on the above variables, cf. eq. (B.1)-(B.5). Making use of the RGEs from appendix B, the running of the expression m 2 Hu is then determined to be where c 1 is the loop factor and S is a linear combination of soft masses: We see that the first 4 terms of the result in eq. (3.6) are analogous to each other, the quantities in the terms correspond respectively to the particles b, t, τ and ν τ (and their superpartners). Each term contains the modulus-squared of its Yukawa coupling, and the factor next to it contains a modulus-squared of the appropriate A-term factor, as well 3 more terms with the soft masses of particles present in the corresponding superpotential Yukawa term. The b and t terms have an additional numerical factor 3 compared to τ and ν due to the 3 possible SU(3) colors they can take. Crucially, the terms also come into the RG beta function with different signs, so it may happen that they cancel. Below the right-handed neutrino mass scale M R , the ν term vanishes. The boundary conditions JHEP06(2020)014 imply that at exactly M GUT , the last term vanishes due to S = 0, and the b and t terms cancel each other, and as well as the τ and ν terms, such that we have As already stated, the scale of the masses of the extra MSSM Higgs bosons will be determined by This same quantity must be positive at low energies also for successful EWSB. It is computed numerically by solving the RGE differential equations of appendix B. We shall often allude to eq. (3.6) for a better understanding of the numerical results, which we now consider. Numerical RGE results We now investigate the RGE properties of the system numerically. To do this as explicitly as possible, we take an example parameter point, whose neighborhood we study. We stress that the conclusions of the RGE behavior in this section nevertheless hold generally, i.e. different example points of Yukawa unification at high energies and consistent with experimental data at low energies yield the same qualitative conclusions, which we checked explicitly by considering different parameter points. Furthermore, we identify the underlying reasons for certain RG behaviors throughout this section, and the generality (where applicable) is also confirmed by results in later sections. We take the following boundary values for the parameters at the scale M GUT = 2.0 · 10 16 GeV: The gauge coupling g 1 is given in the GUT normalization, and M R is the mass of the added right-handed neutrino. 
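Before examining the numerical boundary values in detail, it is useful to spell out the structure of eq. (3.6) explicitly. The sketch below writes out the 1-loop beta function of m^2_Hd − m^2_Hu using the standard MSSM expressions, which should coincide with eq. (3.6) and appendix B up to the conventions used there (which we cannot verify here); the dictionary keys are our own names for the running parameters, and the A-term factors a_x are taken, as in the text, as the factors multiplying the Yukawa couplings.

```python
import math

C1 = 1.0 / (16.0 * math.pi**2)  # the loop factor c1

def S_term(p):
    """Linear combination of soft masses entering the U(1)_Y piece (cf. eq. (A.25)).
    Family index 1 stands for the degenerate 1st and 2nd families, hence weight 2."""
    s = p["m2_Hu"] - p["m2_Hd"]
    for i, w in (("1", 2.0), ("3", 1.0)):
        s += w * (p[f"m2_Q{i}"] - p[f"m2_L{i}"]
                  - 2.0 * p[f"m2_u{i}"] + p[f"m2_d{i}"] + p[f"m2_e{i}"])
    return s

def beta_m2Hd_minus_m2Hu(p, below_MR=False):
    """1-loop beta function of m2_Hd - m2_Hu, cf. eq. (3.6).

    Each Yukawa term carries |y_x|^2 times (|a_x|^2 plus the three soft masses of
    the fields in the corresponding superpotential term); the t and b terms carry
    a color factor 3; the b/tau and t/nu terms enter with opposite signs; below
    M_R the right-handed neutrino is integrated out and the nu term is dropped.
    g1 is in GUT normalization.
    """
    term_b   = 3.0 * p["y_b"]**2   * (p["a_b"]**2   + p["m2_Hd"] + p["m2_Q3"] + p["m2_d3"])
    term_t   = 3.0 * p["y_t"]**2   * (p["a_t"]**2   + p["m2_Hu"] + p["m2_Q3"] + p["m2_u3"])
    term_tau =       p["y_tau"]**2 * (p["a_tau"]**2 + p["m2_Hd"] + p["m2_L3"] + p["m2_e3"])
    term_nu = 0.0 if below_MR else \
              p["y_nu"]**2 * (p["a_nu"]**2 + p["m2_Hu"] + p["m2_L3"] + p["m2_nu3"])
    return 2.0 * C1 * (term_b + term_tau - term_t - term_nu
                       - 0.6 * p["g1"]**2 * S_term(p))
```

At M_GUT with the CMSSM-like boundary conditions, S = 0 and the b/t and τ/ν terms cancel pairwise for unified Yukawa couplings, so the beta function starts at zero, exactly as stated above; a small split y_t > y_b instead gives it a non-vanishing starting value.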
The above values are to be understood as boundary conditions for the RGE in appendix B. At the scale M R , the right-handed neutrino is integrated out; JHEP06(2020)014 below this threshold, the RGE are corrected by removing all terms containing y ν . For the example point under consideration, we have taken M R = M GUT so that by default no effects arise due to the right-handed neutrinos, since the y ν term with the large 3rd family neutrino Yukawa coupling is removed already at the GUT scale; its effect is studied separately below. The values of the gauge couplings at the GUT scale are taken from the high-energy data provided by [9], which uses 2-loop RGEs and takes the SUSY scale at 3 TeV; note that their values are consistent with a typical unified gauge coupling value of ≈ 0.7. The overall scale of the soft parameters m 0 , M 1/2 and a 0 has been taken at the order of a few TeV, which tends to be the preferred scale for the fits to low energy data, as will be seen in the next sections. Also, the main effect we are after in this paper is that the extra MSSM Higgs particles are unexpectedly light compared to the SUSY scale, for example 1 TeV; this effect will be obscured if the SUSY scale is also taken to be lighter than 1 TeV, as used to be popular in past SUSY studies. The few TeV scale for sparticles is compatible with (as of yet) non-observation of SUSY particles at the LHC. Note that the chosen point is such that it gives the correct 3rd generation Yukawa couplings y t , y b and y τ at the scale M Z in the MS scheme, based on the data from [9]. An intuitive qualitative description of how the GUT scale parameters control the fit of the 3rd family Yukawa parameters is the following: • The value y 0 controls the overall scale of the 3 Yukawa couplings, and needs to have the value y 0 ≈ 0.5. • The effect of the soft parameters m 2 0 , M 1/2 and a 0 is to control the SUSY spectrum, through which SUSY threshold effects give the correct ratio y τ /y b . • The quantity tan β controls for the ratio y t /y b (alongside SUSY threshold corrections). Low energy data demands a large value of tan β ≈ 50, a well-known feature of MSSM based t-b-τ unification models. We plot the running under 1-loop RGE from appendix B for the various quantities of the MSSM, with the boundary conditions at M GUT given by the example parameter point in eq. (3.11)- (3.20). We shall also investigate the effect of changing one feature of the boundary conditions at a time, understanding its impact; note that we do not evaluate the worsening of the fit to low energy data under such a deformation, since we are for now interested only in the (numerical) effect on the RGE running. We plot quantities in the range [M SUSY , M GUT ]; note that the lower scale is the SUSY scale, since that is the scale where the sparticle spectrum is computed. This scale is also where a match between the SM and MSSM theories is performed, and it is taken to be the geometric mean of the masses of the two stops (computed for our example point using SusyTC [71] to be JHEP06(2020)014 M SUSY = 5901 GeV). While we used a custom computer code for RGE running based on appendix B for greater control, the results were compared and confirmed with SusyTC when applicable. The RGE running of the system, based on the results of the example point, turns out to have the following properties: 1. Running of gauge and Yukawa couplings, gaugino masses and the A-terms. 
The RGE running of the gauge couplings, Yukawa couplings, gaugino mass parameters, as well as the the A-term factors a x from eq. (B.3) is shown in figure 1. As always in the MSSM, each of the gauge couplings evolves independently from other quantities (at 1-loop level); the couplings approximately meet at ∼ 0.7, and their running values are determined; when the renormalization scale µ r decreases to low energies, g 3 runs upwards and g 1 and g 2 run downwards, see eq. (B.6), due to the signs of MSSM beta coefficients β 3 < 0 and β 1 , β 2 > 0 from eq. (A.24). The running of gaugino mass parameters, according to eq. (B.7), is influenced by the gauge couplings. It is the differences in gauge couplings which drive the gaugino mass-parameter differences from a common boundary point M 1/2 at M GUT . This explains why the gluino mass parameter M 3 increases when approaching M SUSY , while M 1 and M 2 decrease, but all are at a scale of 2 TeV or higher. The RGEs of the Yukawas have two competing contributions to the beta functions, cf. (B.9)-(B.11): a positive contribution from the Yukawas themselves, and a negative contribution from gauge bosons (terms proportional to g 2 i ). The Yukawa couplings can then rise or fall with smaller µ r , depending on whether the gauge or Yukawa contributions to the beta function are dominant, respectively. The 3rd family Yukawa parameters y t and y b rise with lower scale µ r essentially due to the relatively large negative g 2 3 term from the gluons, while y τ stays mostly flat, since realistic unified values of the gauge couplings of ≈ 0.7 and Yukawa couplings of ≈ 0.5 give the Yukawa and gauge contributions approximately equal. The difference between the top and bottom Yukawa, on the other hand, is small and is essentially driven by the |y τ | 2 term in β(y b ) and the difference in the g 2 1 terms in β(y t ) and β(y b ), see eq. (B.9) and (B.10). This ensures a small relative difference y t − y b , with y t > y b at all energies; the very different values of y t and y b at M Z , as implied by the different masses of the t and b quarks, must thus come from the MSSM to SM matching at M SUSY , implying a large tan β of around 50. The RGEs for the A-term factors are given in eq. (B.16)-(B.18). We can see that the difference between a u and a d is essentially driven by the difference between y t and y b , as well as the |y τ | 2 and g 2 1 terms, which essentially already drive the y t and y b difference, as discussed earlier. For this reason, there is again only a small deviation between a u and a d . The slope of a e in absolute terms is smaller due to no gluino related terms, and because of smaller numerical factors in front of the Yukawa terms. JHEP06(2020)014 For m 2 Hu and m 2 H d , the positive Yukawa term contributions to the β functions dominate, leading to a positive slope and thus the parameters becoming smaller and eventually negative with smaller µ r . The drive to m 2 Hu < 0 at low µ r confirms that the EWSB is radiative. Crucially, the necessary condition for EWSB m 2 H d > m 2 Hu is also satisfied at low scales, as will be discussed in more detail later. The soft mass parameters related to the squarks grow fast with smaller µ r due to the large negative contribution of the gluino related terms g 2 3 |M 3 | 2 . These terms are not present in the β function for soft-mass parameters of leptons, so the slepton masses stay almost flat. 
Another general feature of the soft-mass parameter running is that the masses of the 1st and 2nd family of squarks and sleptons (index 1) become larger than those of the JHEP06(2020)014 3rd family (index 3); we are comparing here the soft-mass parameters of particles of the same flavor, but from different families. The simple reason is the additional positive terms proportional to squares of Yukawa couplings, which appear only for 3rd family squarks and sleptons (since the 1st and 2nd family Yukawa coupling are negligible compared to the 3rd family, and they are set to zero in our simple scenario). We thus have the usual inverted hierarchy in the squark and slepton masses. We now discuss how the scenario of t-b unification and y t = 1.1 y b compare. We see that there is little qualitative difference for the values of any one soft parameter taken on its own. Visually though, major quantitative changes in relative terms can be spotted when comparing the quantity m 2 H d − m 2 Hu in the two scenarios, as well as changes in the quantity m 2 u 3 − m 2 d 3 . These changes might be deemed to have an insignificant effect on the low energy observables. But as shown in the previous section, the difference m 2 H d − m 2 Hu turns out to determine the mass scale of the extra MSSM Higgs bosons. That means that the exactness of t-b unification at the GUT scale, as demonstrated by the two scenarios in figure 2, has a big impact on the sparticle spectrum, i.e. on the extra Higgs sector to be precise. This is the major effect that this paper investigates. Effect of t-b unification on m 2 H d − m 2 Hu . We have seen from the RGE of the soft masses in the previous step that t-b unification 1 has little qualitative effect on the running of these parameters taken in isolation, but has a crucial effect on m 2 H d − m 2 Hu . Figure 3 shows RGE trajectories for m 2 Hu under different y t /y b ratio boundary conditions at M GUT , essentially demonstrating the sensitivity of this quantity to t-b unification. We see that for our example point, the running expression m 2 H d − m 2 Hu increases essentially linearly with the y t − y b difference (at least when relative differences are small), and with a substantial increase already when y t and y b differ at the percent level. The impact is even more dramatic when considered in terms of relative increases of m 2 H d − m 2 Hu : a deviation of a mere 10 % from t-b unification raises the value by a factor 4, and consequently the masses of the extra MSSM Higgs particles by a factor of 2. Looking at this from a reverse perspective, when approaching t-b-τ unification from a t-b deformation direction, the predicted masses of the extra Higgses drop very quickly, typically below 1 TeV. Hu > 0 is necessary for (tree-level) EWSB. We can see in figure 3 that this condition is fulfilled even for exact Yukawa unification (the y t = y b curve), at least for this particular example point. This shows that there exist parameter points with exact Yukawa unification and successful EWSB. It is important to note that a successful EWSB with the m 2 H d = m 2 Hu GUT boundary condition (and universal gaugino masses) was not found in some of the prior literature [6,17,31,52,53] due to extensive use of semi-analytic approximate formulas from e.g. [54], as was of the stop masses. 
Strictly speaking, the scale M SUSY shifts slightly with different ratios y t /y b , so that comparing the running quantity m 2 H d − m 2 Hu at a fixed scale is not exactly the same as comparing the mass scales of the extra Higgses. This shift, however, is negligible, since the quantities determining the stop masses run logarithmically with µ r and change only slightly with the ratio y t /y b , as argued in the previous analysis step. It is thus justified to compare the running expression for different curves at a fixed scale M SUSY for qualitative considerations. Contributions to β(m 2 H d − m 2 Hu ). To understand the effect that t-b-τ unification has on the RGE running, we consider the various contributions to β(m 2 H d − m 2 Hu ). One can combine the separate RGE in eq. (B.21) and (B.22) into the β function of eq. (3.6). For our example point, where the right-handed neutrinos are already integrated out at M GUT , there are 4 terms: terms proportional to |y t | 2 , |y b | 2 and |y τ | 2 , as well as a term proportional to S, which is a linear combination of scalar soft masses, see eq. (A.25). We plot these contributions for the cases y t = y b and y t = 1.1 y b in figure 4. The results show that in absolute terms the |y t | 2 and |y b | 2 contributions dominate over the |y τ | 2 one at M GUT , an effect which only increases when running to lower µ r , while the contribution from the S term stays numerically negligible throughout and will thus be ignored in the following discussion. The larger contributions of the t and b terms start out due to larger numeric prefactors (due to color) compared to the τ term. Furthermore, at lower energies the Yukawa couplings y t and y b rise with smaller scale, while y τ falls, see figure 1. In addition, also the soft masses show the same trend, see figure 2. Note, however, that these terms in β(m 2 H d − m 2 Hu ) come with different signs; in particular, the t and b contributions have opposite signs. JHEP06(2020)014 It is thus convenient to compare the difference of the t and b terms (red curve) with the τ contribution (green curve), see right panels of figure 4. We shall refer to these two contributions as the t-b and τ contributions, respectively. The t-b contribution comes into the β function with a negative sign, so whenever the red curve dominates over the green curve, the beta function value becomes negative, i.e. the RGE running of m 2 H d − m 2 Hu has a negative slope. Conversely, when the τ contribution dominates and the green curve is above the red, the slope is positive. As the figure shows, the slope is positive at large µ r and negative at small µ r , which is consistent with figure 3. At low enough µ r the t-b contribution is expected to dominate over the τ contribution regardless of the starting y t /y b ratio simply due to Yukawa coupling values at those energies, and that typically the squark soft masses are larger than the corresponding lepton ones. The ratio y t /y b is crucial, however, for the t-b contribution at energies near the GUT scale: when y t = y b the t-b contribution starts at zero, while y t /y b > 1 implies a non-vanishing starting value for the RGE. 2 This crucially impacts the scale at which the t-b contribution becomes bigger than the τ one, i.e. when the red and green curves on the right panels of figure 4 cross. 
We see that for y_t = 1.1 y_b the t-b contribution already starts out almost as big as the τ contribution at M_GUT, so the curves intersect above 10^14 GeV, while t-b unification delays this until below 10^11 GeV. Consequently, with t-b unification the value of m^2_Hd − m^2_Hu will be much lower, since the rise in its running value is delayed by several orders of magnitude in the energy scale µ_r. This completes our understanding of the effect of t-b unification on m^2_Hd − m^2_Hu: Yukawa unification delays the point at which the t-b contribution in the beta function rises enough to dominate over the τ contribution, allowing the running value of m^2_Hd − m^2_Hu to rise much less by the scale µ_r = M_SUSY. We emphasize that this effect is an indirect consequence of the RG running of all parameters, and can thus be seen only when solving the entire system of RGEs numerically and evolving it over multiple orders of magnitude in µ_r. In simplified analyses, such as studying the local RG behavior at M_GUT by Taylor expansion or taking some running quantities in the beta function as constant to derive a linear-log semi-analytic approximation [54], not even the m^2_Hd > m^2_Hu property at low µ_r is reproduced, let alone the more subtle effect of the t-b deformation.

Effect of b-τ unification on m^2_Hd − m^2_Hu. An interesting question now is what impact b-τ unification of the couplings has on lowering the value of m^2_Hd − m^2_Hu. It turns out that while t-b unification is crucial for this effect, b-τ unification is not. We plot the RGE flow of m^2_Hd − m^2_Hu for different ∆_τ := y_b − y_τ = y_0 − y_τ in figure 6. The results clearly show that b-τ unification has minimal effect on that quantity at the SUSY scale. The two sets of trajectories on the plot correspond to the y_t = y_0 case (red-blue) and the y_t = 1.1 y_0 case (green-cyan); trajectories in the same set differ in ∆_τ from 0 to 0.2, which represents a relative drop in y_τ compared to b-τ unification of more than 40 %, but trajectories in the same set nevertheless cluster together at M_SUSY, despite diverging at first at intermediate energies.

Effect of right-handed neutrinos on m^2_Hd − m^2_Hu. We see from figure 5 that the scale of the right-handed neutrino M_R, associated with the large 3rd family neutrino Yukawa coupling y_ν, has a comparatively small effect on the value of m^2_Hd − m^2_Hu at M_SUSY, relative to the effect of the t-b deformation. The discontinuous changes in the slope happen at the scale M_R, where the right-handed neutrino is integrated out. We conclude that the right-handed neutrinos do not have a large direct effect on the mass scale of the extra Higgs particles, and we therefore do not include them in the analyses of sections 3 and 4. It should be noted, though, that an indirect effect turns out to be possible, since their presence shifts the region of parameter space where good fits to low energy data are obtained, see section 5.

We investigate whether having a simplified set of CMSSM parameters for the soft-term boundary conditions is crucial for having light extra Higgses. A more realistic, yet still simple, alternative is to consider a slightly more general parametrization of the soft terms, which we refer to as "SO(10) boundary conditions". We keep the M_1/2 and a_0 parameters, but introduce two different soft mass parameters m_16 and m_10 for the sfermions and Higgses, respectively: all sfermion soft masses are set to m_16^2 (eq. (3.24)), while m^2_Hu = m^2_Hd = m_10^2 (eq. (3.25)). The notation for m_16 and m_10 signifies which SO(10) representation the scalars of the corresponding soft term are part of.
It is presumed here that H u and H d come from a 10 of SO(10), which allows for t-b-τ unification with the simple renormalizable 3rd family Yukawa operator 16 3 · 16 3 · 10. We investigate the effect of such an SO (10) Hu at the GUT scale, as is common in the literature [27-33, 37-40, 44, 45, 51], can erase the effect of low extra Higgs masses. JHEP06(2020)014 The above discussion also makes it clear that unless one studies scenarios of t-b unification, usually in the context of t-b-τ unification, this effect will be missed. In particular, this effect will not be present in any kind of SU(5) SUSY GUT model attempting merely b-τ unification. The typical mass scales of the extra Higgs particles In this section, we turn to the broader question of the predicted mass range of the extra MSSM Higgses when considering the entire region of parameter space that yields good fits to low energy data. We The next step is a more precise calculation going beyond the proxy quantity m 2 H d −m 2 Hu , instead considering the masses of the extra Higgs particles directly. We make the following improvements in the analysis for estimating the Higgs masses as accurately as possible: 1. The RGE running of the softly broken MSSM is performed at 2-loop level. The masses of the extra Higgses are computed at 1-loop instead of tree level. To perform such improved calculations, we make use of the following tools: JHEP06(2020)014 • The MSSM Higgs sector is computed to higher loop order by the program Feyn-Higgs [74][75][76][77][78][79][80], version 2.13.0. The output of SusyTC gives the Higgs masses at tree level, with the exception of m 2 H ± given at 1-loop by using eq. (2.21). Using the output values of SusyTC as input for FeynHiggs, the SM Higgs mass is computed to 2-loop and the extra Higgs particles' masses are computed to 1-loop. • For the computation of EW vacuum stability we make use of Vevacious [81]. We use SusyTC to produce an SLHA file, amended with values of the MSSM µ and b terms at tree and loop level, computed from the VIN file of the tree and 1-loop potential for EW breaking produced by SARAH 4.14.1 [82,83]. We use the SARAH predefined model with possible charge breaking via stau VEVs. We use these tools for improved computations of the t-b-τ unification model, where we still consider only the 3rd family Yukawa couplings to be non-vanishing as in section 3, and assume the right-handed neutrinos are integrated out at the GUT scale. The GUT scale values of the gauge couplings are taken to be those from eq. (3.11)-(3.13). We shall consider two scenarios of boundary conditions: the CMSSM scenario (5 parameters) and the SO (10) We take µ < 0 in all cases. The standard notation of CMSSM parameters applies, the parameter y 0 is the t-b-τ unified Yukawa coupling, while m 16 and m 10 are defined according to eq. (3.24) and (3.25). Each parameter point in a scenario allows the computation of the Yukawa couplings at M Z , the Higgs mass, as well as the SUSY spectrum. The part of the SUSY spectrum that is of greatest interest to us is the one of the masses of the extra MSSM Higgs particles; we would like to confirm that due to t-b-τ unification they should indeed be comparatively low. As a first check, we recompute the example point from eq. (3.11)-(3.20) with improvements of higher loop order. The results for the mass prediction of the CP-odd Higgs A 0 are the following: (4. 3) The result I corresponds to the tree level mass from eq. 
(2.11) and 1-loop RGE, the result II corresponds to tree level mass and 2-loop RGE, while result III is the most accurate with the 2-loop RGE and 1-loop mass from FeynHiggs. We see that the predicted mass reduced after every improvement, which we find happens generically. This confirms that the low MSSM Higgs mass phenomenon persists (and may be further enhanced) even with the improved loop order in the calculation. We now turn to a more general study of the parameter space beyond just the example point. In the subsequent analysis, the 3rd family Yukawa couplings and the SM Higgs mass JHEP06(2020)014 are considered to be observables: As a measure of goodness of fit we make use of the χ 2 function: where the vector x represents the input parameters of the model from either eq. 1 in [9], with relative errors adjusted upwards to 1 % due to limited precision of our RGE procedure from M GU T to M Z . The SM Higgs mass central value was taken to be m h = 125.09 GeV [84], with a 3 GeV error due to theoretical uncertainties in the computation. We show that the prediction of a low extra Higgs mass is a generic feature of t-b-τ unification rather than of just the example point from the previous section. For this reason we search for a number of other points in the parameter space of CMSSM, which provide a good fit of the observables. We do this by a systematic search in the m 0 -a 0 plane of parameters. For a fixed m 0 and a 0 , we perform a minimization of the χ 2 for the other 3 input parameters M 1/2 , y 0 and tan β in eq. (4.1). Remember that these 3 free parameters are used to fit 4 observables of eq. (4.4), which may not necessarily be possible for an arbitrary point in the m 0 -a 0 plane. The computation involves a minimization of χ 2 for each point in a 25 × 37 grid and subsequent interpolation between grid points; the points were taken equidistant and in the range 100 GeV ≤ m 0 ≤ 5500 GeV, − 12000 GeV ≤ a 0 ≤ 6000 GeV, (4.6) and include the edge points of these intervals. As we shall see, this range includes the entire region of admissibly low χ 2 , at least in the CMSSM context. The relevant results of this fit are summarized in figures 9, 10 and 11. We analyze them below: • Figure 9 shows the contours of the minimal attainable χ 2 for a point in the m 0 -a 0 plane, with the shaded region excluding points due to vacuum stability, to be discussed below. Contour regions from blue to white represent points where a reasonable fit can be obtained: the darkest shade of blue represents almost perfect fits of χ 2 < 1, while the white region represents the edge points where χ 2 < 9, such that the deviation in any one observable cannot be more than 3σ. We see that the allowed region in the m 0 -a 0 plane is compact: the ranges are roughly m 0 < 4 TeV, − 12 TeV < a 0 < 5 TeV, (4.7) JHEP06(2020)014 i.e. the regions involve scales of a few TeV. The numeric values of µ at the SUSY scale are computed to be in the interval (−12 TeV, −2 TeV) for all points. • The darkly shaded region in figure 9 corresponds to points in the m 0 -a 0 plane for which χ 2 has been minimized, but the vacuum is not sufficiently stable. The threshold is taken to be at 10× the current age of the universe, but the exponential sensitivity of the lifetime to the bounce action (see [85][86][87]) means that one order of magnitude difference in the threshold does not appreciably change the excluded area. The unshaded region thus represents points with the EW vacuum either being metastable with a sufficiently long lifetime or stable. 
Note that the instability in the shaded region does not necessarily exclude all possible points with a given m 0 and a 0 , but only the one minimizing χ 2 . Although an improved approach would be to include a sufficiently long vacuum lifetime as a necessary condition in the minimization of χ 2 , this would be much more demanding computationally. Ultimately, the vacuum computation performed here is sufficient to show that most of the low χ 2 region consists of allowed points. • The minimization of χ 2 gives the following ranges for tan β and y 0 for all best-fit points: 48 < tan β < 55, 0.44 < y 0 < 0.50. (4.8) These two parameters thus have small relative changes for best-fit points with different CMSSM soft parameters. The results are compatible with the well-known fact that t-b-τ unification requires tan β ≈ 50, while the unified coupling is approximately y 0 ≈ 0.5. A more interesting input parameter to track for different best-fit points in the m 0 -a 0 plane, however, is the gaugino mass parameter M 1/2 , since this provides the information for all CMSSM soft parameters of the well-fit points. A contour plot of the M 1/2 values is presented in figure 10; this data represents a 2D surface of best (3rd family) Yukawa fits in the CMSSM soft-parameter space of m 0 , M 1/2 and a 0 . Any good fit of t-b-τ unification in the CMSSM would thus be expected to always lie in a compact region around the hypersurface: the m 0 and a 0 values would need to lie in the region of low χ 2 , while the M 1/2 value would need to lie near the one for the best-fit point. Results show that M 1/2 values of most best-fit points with χ 2 < 9 lie in the range between 2.5 TeV and 6 TeV, with the value increasing with increasing m 0 and |a 0 |. • Figure 11 shows the predicted mass m A 0 (at 1-loop) of the neutral CP-odd MSSM Higgs A 0 , which is the main result of interest. Note that CP is not broken at 1-loop, because our parameters do not have complex phases. We see that all best-fit points in the allowed region of the m 0 -a 0 plane give a relatively low mass m A 0 , roughly in the range between 150 GeV and 1200 GeV. Important note: the m A 0 values are given only for the best-fit points, so one should be careful not to interpret the figure as a precise prediction of the CP-odd Higgs mass as a function of only a 0 and m 0 . Note the following important reservation about the results: they merely show the "naive" predicted mass of the extra Higgs particles in the CMSSM model. Potential experimental constraints have not been considered in this plot. In fact, as shall be discussed in the next section, practically the entire region predicted here (assuming exact t-b-τ unification) is under severe stress from ATLAS and CMS searches of H 0 → τ τ . Challenges to t-b-τ unification We have seen in section 4 that the scale of the extra MSSM Higgses is generically expected to be low in t-b-τ unification. The ultimate reason lies in the RG flow of the quantity m 2 H d − m 2 Hu , which was analyzed in section 3, and found to have a relatively small yet positive value, the latter being important for consistent EWSB. In this section, we analyze the predictions of t-b-τ further and confront them with experimental data from the LHC. SUSY spectrum with SO(10) boundary conditions Figure 12. As a first step, we extend the CMSSM scenario to the more general one with SO(10) boundary conditions, where the parameters consist of those in eq. (4.2), while the χ 2 is again defined with the observables of eq. (4.4). 
The standard deviations are taken as follows: the relative errors of the 3rd family Yukawa couplings are taken to be 1 %, while the error of for the SM Higgs mass is taken to be 2 GeV due to theoretical uncertainties in the computation. This time we compute the overall expectations from this setup (with no fixed parameter values) by computing posterior probability densities of quantities of interest in a Bayesian approach by use of the Markov Chain Monte Carlo algorithm. This paragraph contains some technical details of the computation. The MCMC algorithm was performed with 12 parallel chains, each yielding 1.3 · 10 5 points after discarding the initial bunch of 10 4 in the burn-in period. The total number of used data points is thus 1.56 million. Vacuum existence at 1-loop was checked, but not vacuum stability under EM charge breaking. The result of interest from the MCMC computation is the SUSY sparticle spectrum, which turns out to be quite predictive, due to good fits obtained only in a compact region of parameter space, analogously to section 4. The results are presented in figure 12, where we draw the 1-σ and 2-σ highest posterior density (HPD) intervals for the masses of the sparticles. We use the labelsg for gluinos,χ 0 i for neutralinos,χ ± i for charginos,ũ i for up-type squarks,d i for down-type squarks,ẽ i for charged sleptons andν i for sneutrinos, where the index i goes over different ranges for different types of superpartners, but always corresponds to increasing mass (these are mass eigenstates, so the index i is not directly related to flavor). JHEP06(2020)014 We make the following comments on the sparticle spectrum results: • The lowest part of the SUSY spectrum are the extra Higgs particles H 0 , A 0 and H + . They are expected in the rough range between 500 GeV and 1000 GeV. This reproduces the results for the case of CMSSM from section 4. • The next lightest states are the lightest neutralinoχ 0 1 and the lightest charged sleptoñ e 1 . We see from the expected ranges that the lightest supersymmetric particle (LSP) for some points must be the lightest charged slepton (i.e. the stau) instead of the neutralino. Such points are experimentally problematic, since they would predict a charged LSP as a dark matter candidate. We performed a second MCMC analysis with the added constraint that the LSP must be the neutralino; this addition only minimally changes the quantitative predictions for HPD intervals of the other parts of the spectrum, so we choose not to include a separate plot. • The rest of the spectrum is higher than 2 TeV, with gluinos typically at > 5 TeV. An interesting feature is that the sleptons are expected to have lower masses than squarks. The predicted sparticle spectrum is mostly compatible with the LHC data and searches for these particles, with one notable exception: the extra MSSM Higgs particles. The most stringent constraint comes from the possible ditau decay of neutral Higgses H 0 /A 0 → τ τ . The general scenario relevant in our case is the so called hMSSM [88], which assumes for all SUSY particles other than Higgses to be above 1 TeV. It was shown that specifying only two parameters, tan β and m A 0 , is sufficient to uniquely predict other tree-level quantities. The observed ditau rate is consistent with the SM background, so the non-observation of H 0 or A 0 is summarized by upper bounds on tan β for a given m A 0 in the m A 0 -tan β plane. 
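Before turning to the ditau constraint in detail, the Bayesian scan described above can be summarized schematically. The following Metropolis-Hastings sketch uses the chain settings quoted in the text (burn-in of 10^4 points, 1.3·10^5 retained points per chain, 12 chains) and a likelihood proportional to exp(−χ²/2) built from the observables of eq. (4.4); compute_observables is a placeholder for the actual spectrum pipeline (SusyTC plus FeynHiggs), which is not reproduced here, and the parameter names, proposal widths, and central values are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# SO(10) boundary-condition parameters (names are ours): m16, m10, M_1/2, a0, y0, tan(beta)
PARAM_NAMES = ["m16", "m10", "M12", "a0", "y0", "tan_beta"]
STEP = np.array([100.0, 100.0, 100.0, 200.0, 0.005, 0.5])  # proposal widths (illustrative)

# Observables of eq. (4.4): 3rd-family Yukawa couplings at M_Z and the SM Higgs mass
OBS_CENTRAL = {"y_t": 0.95, "y_b": 0.0164, "y_tau": 0.0100, "m_h": 125.09}   # illustrative
OBS_SIGMA   = {"y_t": 0.0095, "y_b": 0.000164, "y_tau": 0.000100, "m_h": 2.0}  # 1% / 2 GeV

def compute_observables(theta):
    """Placeholder for the real pipeline (2-loop RGE running and spectrum with SusyTC,
    Higgs masses with FeynHiggs); must return the predicted observables for theta."""
    raise NotImplementedError

def chi2(pred):
    return sum(((pred[k] - OBS_CENTRAL[k]) / OBS_SIGMA[k]) ** 2 for k in OBS_CENTRAL)

def run_chain(theta0, n_keep=130_000, n_burn=10_000):
    theta = np.asarray(theta0, dtype=float)
    c2 = chi2(compute_observables(theta))
    samples = []
    for step in range(n_burn + n_keep):
        proposal = theta + STEP * rng.standard_normal(len(theta))
        c2_new = chi2(compute_observables(proposal))
        # acceptance probability min(1, exp(-delta_chi2 / 2))
        if rng.random() < np.exp(min(0.0, 0.5 * (c2 - c2_new))):
            theta, c2 = proposal, c2_new
        if step >= n_burn:
            samples.append(theta.copy())
    return np.array(samples)

# 12 such chains, started from different (illustrative) seed points, are combined,
# and the 1-sigma / 2-sigma HPD intervals of the sparticle masses are read off.
```

The combined samples are what the HPD intervals shown in figure 12 are derived from.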
The latest ATLAS [68] and CMS [69] results on this, using the dataset with 36 fb −1 of integrated luminosity at √ s = 14 TeV, suggest a bound of m A 0 1.5 TeV at tan β ≈ 50. Based on figure 12, the t-b-τ model prediction for the mass of H 0 and A 0 is clearly in tension with the experimental bounds, at least for most of the otherwise available parameter space. In fact, a search among computed MCMC points showed that the extra Higgs masses in the scenario of SO(10) boundary conditions cannot go much higher than 1200 GeV (since that would incur a severe χ 2 penalty). Comparing the various contributions to χ 2 shows that the tension comes from the SM Higgs mass, which tends to be dragged too high for high values of the extra Higgses. This result is consistent with the upper limit for the best fit points in the more constrained CMSSM scenario, see figure 11; the additional parameter gained by the split of m 0 to m 16 and m 10 in the SO(10) boundary conditions thus does not appear to gain much maneuvering space over CMSSM for increasing the masses of the extra Higgs states. The CMSSM region in figure 11 with high extra Higgs masses is located at small m 0 , i.e. m 0 500 GeV, while a 0 ∼ −5 TeV. This result indicates that exact t-b-τ unification, at least within the SO(10) boundary conditions scenario, is under strain exactly because of the low masses of the extra MSSM Higgses, the very feature pointed out and studied in this paper. Since we are now interested also in the masses of the extra Higgses, we perform the minimization with more observables in the χ 2 . For the input we have the SO(10) boundary condition parameters, now also assuming a possible split in t-b and one right-handed neutrino (the one with the largest Yukawa coupling, i.e. the unified coupling, in the Dirac mass term) at the scale M R , which may now be below M GUT . The other two Majorana type masses of the right-handed neutrinos are again set at the GUT scale. The input parameters are now Deformation scenario parameters: tan β, y 0 , y t , where the unified Yukawa coupling now excludes the top coupling y t : As for the χ 2 , we consider the observables from eq. (4.4), with two additional penalty terms. The first penalty term is associated to the non-observation of H 0 /A 0 → τ τ at the LHC, and is present only if tan β is too high given the value of m A 0 . The expected values of the tan β upper bound and 1-σ upper error of the constraint (extended to bigger errors assuming a Gaussian profile) are taken from figure 10b from the ATLAS analysis [68]. The other penalty basically enforces the neutralino to be the LSP, which turns out to be easily possible. JHEP06(2020)014 We now fix the t-b deformation quantity y t /y 0 −1 and M R , and perform a minimization in the other parameters. We do so for each point in a 7 × 7 grid of equidistant points in the "deformation plane" of y t /y 0 − 1 and M R . The results of the minimized χ 2 (using interpolation of the grid results to show contours) is shown in figure 13. The range of t-b deformations is taken from 0 to 6 %, while the right-handed neutrino scale M R is considered on a logarithmic axis in the range between 10 13 GeV and 10 16 GeV. Note: the points were checked for the existence of the EW vacuum at 1-loop, but not explicitly for vacuum stability due to too excessive computation time. On the other hand, the points are close to points which have been checked with Vevacious, and overall in an unproblematic region with respect to vacuum stability. 
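The penalized fit on the deformation grid can be sketched as follows. The ATLAS ditau bound is represented here by a placeholder interpolation tanb_bound(mA0) standing in for the curve (and its 1σ band) read off figure 10b of [68], and the minimization over the remaining parameters is only indicated, since it wraps the full spectrum pipeline; all names and numerical stand-ins are our own assumptions.

```python
import numpy as np

def tanb_bound(mA0):
    """Placeholder for the ATLAS H0/A0 -> tau tau upper bound on tan(beta) as a
    function of m_A0 (figure 10b of [68]); a crude linear stand-in is used here."""
    return np.interp(mA0, [1000.0, 1500.0, 2000.0], [35.0, 50.0, 65.0])

def tanb_bound_sigma(mA0):
    """Placeholder 1-sigma width of the experimental bound (assumption)."""
    return 2.0

def chi2_with_penalties(theta, spectrum_fn, chi2_obs_fn):
    """chi2 from the observables of eq. (4.4) plus the two penalty terms of the text:
    (i) a Gaussian penalty if tan(beta) exceeds the ditau bound for the predicted m_A0,
    (ii) a penalty unless the lightest neutralino is the LSP.
    spectrum_fn must return m_A0, tan_beta, the LSP identity and the fit observables."""
    spec = spectrum_fn(theta)
    c2 = chi2_obs_fn(spec)
    excess = spec["tan_beta"] - tanb_bound(spec["m_A0"])
    if excess > 0.0:                      # penalize only points above the bound
        c2 += (excess / tanb_bound_sigma(spec["m_A0"])) ** 2
    if spec["LSP"] != "neutralino1":
        c2 += 100.0                       # effectively enforce a neutralino LSP
    return c2

# For each node of the 7x7 grid in (y_t/y_0 - 1, M_R), the remaining parameters
# would then be minimized over, e.g. with scipy.optimize.minimize applied to
# lambda th: chi2_with_penalties(th, spectrum_fn, chi2_obs_fn).
```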
The numeric value of the µ parameter at the SUSY scale is in the interval (−10 TeV, −6.7 TeV) for all points. All points in the figure have the extra MSSM Higgs particles as the lowest lying states at around 1.3-1.5 TeV in the sparticle spectrum, followed by the neutralino with a mass > 2 TeV. As stated earlier, the main difficulty is the reconciliation of the SM Higgs mass with the H 0 /A 0 → τ τ constraint on extra Higgs masses. The best fit points all have small m 10 , i.e. m 10 < 500 GeV, as in the CMSSM case, but the m 16 -m 10 split now allows for a bit bigger a 0 in magnitude without compromising χ 2 : a 0 ∼ −10 TeV. The results clearly show that the t-b deformation at a few percent level can indeed greatly reduce the tension (for example the blue region in the plot corresponding to χ 2 < 6). This actually happens in two ways: first, it increases the masses of the extra Higgs particles and thus m A 0 (RGE effect), and second, it allows for a smaller tan β of around 46, which also relaxes tension, since H 0 /A 0 → τ τ constraints are in the form of an upper bound on tan β. In addition, figure 13 also shows that the fit is improved by a lower right-handed neutrino scale, but the effect is sub-dominant compared to the t-b deformation. Another important result of the minimization in the grid worth stating is also the following: the best fit points still tend to have the extra Higgs masses at the lower end of the allowed range. The non-deformed points under tension have the Higgs just above 1300 GeV, while the deformed points not-under tension have those masses up to 1500 GeV. Though the ditau constraint did not require them to be higher than around 1500 GeV, this still shows that the deformed points have a preference for lower rather than higher masses of m A 0 . A continuing non-observation of the ditau decay coming from H 0 /A 0 neutral MSSM Higgses at the LHC would thus put the other points under increasing strain as well, requiring an ever larger t-b deformation. Conclusions We considered in this paper t-b-τ Yukawa unification in the context of SO(10) SUSY GUTs with µ < 0. The µ < 0 is the preferred sign for Yukawa unification, since it provides the SUSY threshold corrections to the b quark in the correct direction. Below the GUT scale, a good effective description is a softly broken MSSM possibly extended by right-handed neutrinos (if they are not yet integrated out). The boundary condition for the soft parameters at the GUT scale are assumed to be CMSSM-like, except for an additional split of the scalar soft mass parameter m 0 into sfermion masses m 16 and the mass parameter m 10 of JHEP06(2020)014 the Higgs doublets H u and H d , since these two soft mass parameters involve particles from different SO (10) representations. In particular, the features most important for comparison with the existing literature are exact Yukawa unification as opposed to quasi-unification, Hu at the GUT scale, µ < 0, and universal gaugino masses. We consider the above scenario to be the vanilla setup for Yukawa unification in SO(10), yet this has remained a largely unexplored possibility in the literature, where one or more of our stated assumptions are violated in an important way. The reason for that was a pessimistic outlook on the possibility of REWSB, based on approximate semi-analytic solutions of RGEs. In contrast, we show in this paper that REWSB is in fact possible to achieve by solving the full set of RGEs numerically. 
The quantity of interest for successful EWSB is m 2 H d − m 2 Hu , which must be positive at the SUSY scale. In the large tan β regime needed for Yukawa unification, this same quantity determines also the mass scale of the extra MSSM Higgs particles H 0 , A 0 and H ± (cf. section 2). We find that the running quantity m 2 H d − m 2 Hu vanishes at the GUT scale due to the boundary conditions, first runs to negative values at lower scales, but the trend then reverses and it results in a positive value at M SUSY . Crucially, this positive value is smaller than might be expected based on the scale of the soft parameters, typically below TeV (when assuming exact t-b-τ Yukawa unification at the GUT scale). This yields a SUSY mass spectrum with the characteristic feature that the extra Higgs states are the lowest lying sparticle states, a feature that we focused on in this paper. We study in detail the 1-loop RGE running of the quantity m 2 H d − m 2 Hu in section 3; we analyze the various contributions to its beta function, as well as determine the sensitivity to various deformations of boundary conditions. We find that the low mass feature for the extra MSSM Higgs particles is very sensitive to the exactness of t-b unification, with a 10 % percent deformation easily raising the scale by a factor of 2. The b-τ unification, presence of right-handed neutrinos, or a split of a universal scalar soft mass m 0 into the sfermion and Higgs parameters m 16 and m 10 , on the other hand, produce numerically a far more modest effect. Given the large sensitivity to t-b deformations, we conclude that a top-down RGE calculation is more suitable to accurately model the extra Higgs masses in exact t-b-τ unification. This effect of low extra Higgs masses is ubiquitous in the entire parameter space, at least where t-b-τ unification leads to realistic Yukawa values at low energies. Most of the parameter space, both in the CMSSM and in the SO(10) boundary condition scenario, where a good fit to the 3rd family Yukawa couplings and the SM Higgs mass can be obtained, favors the extra Higgs masses at less than 1 TeV (for the case of exact t-b-τ unification), as presented in sections 4 and 5. These model predictions, however, are in tension with ATLAS and CMS searches of ditau decays of neutral extra Higgses, i.e. H 0 /A 0 → τ τ . The experimental searches result in upper bounds on tan β as a function of m A 0 . Since t-b-τ unification requires a large tan β ≈ 50, this suggests the extra Higgses to be above roughly 1.5 TeV. In exact t-bτ unification with correct Yukawa predictions at low scales, it is hard to achieve masses above ∼ 1.3 TeV; the main obstacle turns out to simultaneously obtain heavy extra Higgses alongside a sufficiently low SM Higgs mass near 125 GeV. JHEP06(2020)014 The tension with experiment can be reduced by relaxing exact t-b-τ unification. As shown in section 5, a deformation of t-b unification at a level of a few percent can completely relieve the tension with experiment, both by raising the masses of the extra Higgs particles and lowering the required tan β. Such a deformation of a few percent could come about from GUT threshold corrections, especially given the large numbers of particles in the SO (10) representations in the Higgs sector (which are of course model dependent), or Planck scale suppressed operators. It should be noted, however, that even deformed t-b-τ unification prefers lower rather than higher extra Higgs masses. 
In summary, we have shown that t-b-τ (quasi-)unification in SO(10) SUSY GUTs with µ < 0 generically features comparably light extra MSSM Higgs particles. For exact t-b-τ unification we find a tension with LHC constraints from H 0 /A 0 → τ τ , due to predicting too light masses of the extra MSSM Higgses. The tension can be successfully alleviated by relaxing the scenario to quasi-unification of Yukawa couplings: a few percent split of the top Yukawa from the unified value (most importantly from the bottom Yukawa) can bring the extra Higgs states to sufficiently high values to avoid the present experimental constraints. Nevertheless, masses of these states close to the present bounds are still preferred. This implies that a continuing non-observation of the extra MSSM Higgses would require ever bigger deformation of t-b-τ unification, finally disfavoring the scenario. Conversely, an observation of an extra Higgs state in the ditau decay channel could be the first sparticle observation of the t-b-τ unified SO(10) SUSY GUT model, and measuring a sparticle spectrum with extra Higgses having the lowest masses could be a hint for the realization of this scenario in nature. JHEP06(2020)014 (A.10) JHEP06(2020)014 The loop factor c 1 is defined as For the Majorana neutrino mass associated to the large 3rd family neutrino Yukawa coupling, we assume the value M ν3 = M R at the scale M R , implying that this heavy neutrino is integrated out at the scale M R . The M ν3 does not appear in the RGE of any other quantity. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
18,205.6
2020-06-01T00:00:00.000
[ "Physics" ]
Metallurgical Model of Diffusible Hydrogen and Non-Metallic Slag Inclusions in Underwater Wet Welding of High-Strength Steel: High susceptibility to cold cracking induced by diffusible hydrogen and hydrogen embrittlement are major obstacles to greater utilization of underwater wet welding for high-strength steels. The aim of the research was to develop gas-slag systems for flux-cored wires that have high metallurgical activity in the removal of hydrogen and hydroxyl groups. Thermodynamic modeling and experimental research confirmed that a decrease in the concentration of diffusible hydrogen can be achieved by reducing the partial pressure of hydrogen and water vapor in the vapor-gas bubble and by increasing the hydroxyl capacity of the slag system in metallurgical reactions leading to hydrogen fluoride formation and ionic dissolution of hydroxyl groups in the basic fluorine-containing slag of a TiO2-CaF2-Na3AlF6 system. Introduction Underwater wet welding is a welding technique commonly used for the construction and repair of ocean-going vessels, oil and gas platforms, and offshore wind turbines. The load-bearing structures of such vessels and installations are often made of high-strength steels [1,2], which pose considerable challenges to welders and to the welding processes in obtaining welds exhibiting high levels of strength, ductility, and impact toughness [3,4]. A further issue is that underwater wet welding is susceptible to weld defects like hydrogen-assisted cold cracking, porosity, slag inclusions, and delayed hydrogen embrittlement [5][6][7][8][9][10]. Consequently, underwater wet welding has found only limited use for critical applications, which motivates further study of the mechanisms of defect formation. The appearance of defects during underwater welding is associated with the formation of diffusible hydrogen, active oxygen, and slag in the welding zone [11]. In underwater wet welding, the welding occurs in a vapor-gas bubble [12][13][14] in which the hydrogen content reaches 85-96% [15]. When welding with coated electrodes and flux-cored rutile wire, the vapor-gas bubble comprises 93-98% H2, 1.5-6% CO, and 0.5-2% CO2 [16]. Hydrogen from the vapor-gas bubble can be absorbed into the liquid weld, which is the major cause of porosity in underwater wet welding. The gas composition of pores in the weld consists of 62-82 wt.% H2, 11-24 wt.% CO, and 4-6 wt.% CO2, depending on the composition of the electrode coating and the welding parameters used [17]. Before dissolution of hydrogen atoms in the liquid weld pool, dissociation of H2O and H2 occurs [18]. At the arc temperature, water (H2O) dissociates according to the following reaction: H2O = H2 + 0.5O2 − 260 kJ (at 6000 K) (1). The equilibrium of this reaction is described by K_q = (P_H2 · P_O2^0.5)/P_H2O (2), where K_q is the reaction equilibrium constant, and P_H2, P_O2, and P_H2O are the partial pressures (Pa) of H2, O2, and H2O, respectively. With further heating of the gas mixture, endothermic reactions of dissociation of the H2 molecules and ionization of the H atoms occur. Dissolution of molecular hydrogen in the weld pool increases with growth in the partial pressure of the components of the gas mixture according to Sieverts' law: [H] = K_s · √P_H2 (5), where [H] is the hydrogen content in the weld in wt.%, P_H2 is the partial pressure of molecular hydrogen in the gas phase, and K_s is the solubility constant.
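The role of the hydrogen partial pressure in eq. (5) can be illustrated numerically. The sketch below is a minimal Python example of Sieverts' square-root scaling; the solubility constant is an arbitrary placeholder, since the calibrated value depends on temperature and alloy and is not given in the text.

```python
import numpy as np

def dissolved_hydrogen(p_h2, k_s=1.0):
    """Sieverts' law, eq. (5): [H] = K_s * sqrt(P_H2).

    p_h2 : partial pressure of molecular hydrogen (same units as used to
           calibrate k_s); k_s : solubility constant (placeholder value).
    """
    return k_s * np.sqrt(p_h2)

# Halving P_H2 (e.g. by displacing H2 with CO/CO2 from carbonate
# dissociation) lowers dissolved hydrogen by a factor of ~1/sqrt(2) ~ 0.71.
for p in (1.0, 0.5, 0.25):
    print(f"P_H2 = {p:.2f} -> [H] proportional to {dissolved_hydrogen(p):.2f}")
```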
With increases in the immersion depth, the pressure in the vapor-gas bubble increases by about 0.1 MPa for every 10 m of immersion depth, and at a depth of 50 m the total pressure reaches 0.6 MPa. The increase in pressure promotes the dissolution of hydrogen in the weld pool and thus porosity increases [19][20][21][22][23]. One mechanism for diffusible hydrogen reduction is a decrease in the hydrogen partial pressure in the vapor-gas bubble atmosphere, for example, by dissociation of carbonates and fluorides, namely Na2CO3, NaF, CaCO3, CaF2, MgCO3, and MgF2, in the flux-cored wire. To reduce porosity, the carbonates CaCO3 and MgCO3 can be added into the electrode coatings. The carbonates dissociate in the vapor-gas bubble with the formation of CO2 and CO, which reduces the hydrogen partial pressure above the weld pool [24]. A second mechanism for hydrogen reduction is the chemical reaction of hydrogen and fluorine with formation of HF compounds in reactions with the fluorides NaF, CaF2, MgF2, AlF3, etc. [25,26]. A linear decrease in the content of diffusible hydrogen [H] in the weld metal occurs with increases in the content of CaF2 from 0 to 86 wt.%. Increasing CaF2 is a more effective approach for reducing hydrogen than adding CaCO3 to the electrode coating. For example, the [H] content in the weld is 54 cm3/100 g when adding 20 wt.% CaCO3 to the electrode coating, whereas when adding 20 wt.% CaF2, the [H] content decreases to 39 cm3/100 g [26]. A third mechanism for hydrogen reduction is an increase of the oxidation potential of the weld pool and of the solubility of water vapor and OH hydroxyl groups in the liquid slag, in particular by the addition of hematite Fe2O3 with a density of 5.3 g/cm3 [9]. Hematite Fe2O3 decomposes under high-temperature conditions with the formation of wüstite FeO in the molten slag, which increases the basicity index of the slag; in addition, FeO oxidizes the weld pool, which inhibits dissolution of diffusible hydrogen in the weld pool. However, an increase in the oxidizing potential of the slag and of the atmosphere of the vapor-gas bubble leads to non-metallic slag inclusions and oxidation of alloying elements [27][28][29][30], which reduces the mechanical properties of the welds [9,31]. The slag basicity index BI is calculated following [32] (eq. (6)). The hydroxyl capacity C_OH of a slag system is determined according to the following equation: C_OH = (H2O) · (P_0/P_H2O)^0.5 (7), where H2O is the content of water vapor in the slag in wt.%, P_H2O is the partial pressure of water vapor in the gas phase above the molten slag in the equilibrium state, and P_0 is the atmospheric pressure [33]. A correlation for the hydroxyl capacity C_OH of a slag has been proposed in [34,35] as follows: log C_OH = 12.04 − 32.63Λ + 32.71Λ² − 6.62Λ³ (8), where Λ is the optical basicity of the slag. An increase in the basicity index of the slag and in the hydroxyl capacity can be achieved by adding CaF2 and cryolite Na3AlF6, which decrease the melting point, viscosity, hydrogen permeability, and density of the slag system [36][37][38][39][40][41]. The increase in slag basicity and the presence of F− ions elevate the solubility of water vapor and promote the ionic binding of the hydrogen atom in OH hydroxyl groups, which leads to a decrease in the content of diffusible hydrogen [42]. The subsequent binding of hydroxyl groups OH is possible in the polymerization of AlF6^3− and AlF4^− anions and the formation of clusters with the bonds -F-H-F- and -Al-O-Al- [43][44][45].
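As a quick numerical illustration of eq. (8), the following Python snippet evaluates the proposed log C_OH correlation over a few optical basicity values. The Λ values are arbitrary examples, not measured slag compositions from this work.

```python
import numpy as np

def log_c_oh(optical_basicity):
    """Hydroxyl capacity correlation, eq. (8):
    log C_OH = 12.04 - 32.63*L + 32.71*L^2 - 6.62*L^3, L = optical basicity."""
    L = np.asarray(optical_basicity, dtype=float)
    return 12.04 - 32.63 * L + 32.71 * L**2 - 6.62 * L**3

# Illustrative optical basicities spanning acidic to basic slag compositions.
for lam in (0.55, 0.65, 0.75, 0.85):
    print(f"Lambda = {lam:.2f} -> log C_OH = {log_c_oh(lam):.2f}")
```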
Simultaneous implementation of all three mechanisms for the decrease of diffusible hydrogen in underwater wet welding is possible by creating a low-density slag system based on TiO2-CaF2-Na3AlF6. It is known that water vapor dissolves in acidic and basic slags through ionic reactions [46][47][48]. The transition of atomic hydrogen into the weld pool and the formation of diffusible hydrogen [H] occur according to ionic equations proposed by S.G. Parshin. In the molten fluoride slags, an ionic reaction binds hydrogen with the formation of anions (OH−) and the gaseous compound HF↑. Thus, the formation of anions (OH−), the binding of hydrogen in HF, and the formation of network clusters of AlF6^3− and AlF4^− anions can energetically hinder the transition of atomic hydrogen into the weld pool and the formation of diffusible hydrogen [H]. The aim of the research was to develop a gas-slag system for a flux-cored wire for underwater wet welding that has high metallurgical activity and reduces diffusible hydrogen and non-metallic slag inclusions by removing hydrogen and hydroxyl in the vapor-gas bubble atmosphere and by increasing the solubility of water vapor in the slag phase. Materials and Methods Samples of API X70 pipeline steel (CHTPZ, Chelyabinsk, Russia) with bainitic microstructure, having dimensions of 300 mm × 200 mm × 21.3 mm, were welded underwater in butt and lap joint configurations, as shown in Figure 1. Mechanized underwater wet welding was performed by divers at a depth of 12 m using a Neptun-4 submersible (Paton Institute of Electric Welding, Kiev, Ukraine). Flux-cored wires of type PPS-AN1 (Paton Institute of Electric Welding, Kiev, Ukraine) (TiO2-Fe2O3-MnO-iron powder composition) and PPS-APL2 (Educational Scientific and Technical Center "Svarka", St. Petersburg, Russia) (TiO2-CaF2-Na3AlF6-MnO-iron powder composition) were used. The wires had a diameter of 1.6 mm, and the rutile electrodes E7014 and UW/CS-1 (Broco, ON, USA) (TiO2-CaCO3-SiO2-Al2O3-MnO-iron powder composition) were 3.2 mm in diameter. Welding parameters are shown in Table 1. Mechanical tests were conducted in compliance with GOST 6996-66 using a Super L60 machine (Tinius Olsen, Horsham, PA, USA), a PH450 pendulum impact test system (Walter + Bai AG, Löhningen, Switzerland), and an EMCOTEST DuraScan-20 hardness tester (EMCO-TEST Prüfmaschinen GmbH, Kuchl, Austria). The chemical composition was determined with a Bruker Q4 TASMAN optical emission spectrometer (Bruker, Karlsruhe, Germany). A Zeiss Axiovert 200 MAT microscope (Carl Zeiss AG, Oberkochen, Germany) was used to analyze the microstructure, and an ERESCO 42M X-ray unit (GE Sensing and Inspection Technologies GmbH, Ahrensburg, Germany) was used for X-ray testing in compliance with GOST 7512-82. Research of the vapor-gas bubble formation was performed by the shadow method with a laser system and a Phantom VEO 710L high-speed camera (Vision Research, Wayne, NJ, USA) at a frame rate of 8000 Hz. Diffusible hydrogen content was determined by the vacuum method according to GOST 34061-2017 (ISO 3690:2012) using an accelerated method [49] with automatic bead welding in water at the depth of 0.8 m.
Thermodynamic calculations were performed using FactSage (CRCT, Montreal, Canada) and Terra (Bauman Moscow State Technical University, Moscow, Russia) and were based on thermodynamic data of individual substances [50]. Results and Discussion Underwater wet welding with a self-shielded flux-cored wire occurs in a vapor-gas bubble formed during dissociation of water. The welding arc consists of a central zone (arc column), a boundary zone around the arc column, and a molecular layer in which water vapor dissociates. A proposed model of underwater wet welding using flux-cored wire is shown in Figure 2. The formation of a vapor-gas bubble includes several phases: nucleation, volume expansion with pulsations (growth), and collapse, as shown in Figure 3. A detailed model of the metallurgical processes in underwater wet welding is shown in Figure 4. In the molten slag, electrochemical interactions occur between OH− hydroxyl groups and AlF6^3− and AlF4^− anions with the formation of the bonds -F-H-F- and -Al-O-Al-, as shown in Figure 5. As a result of water dissociation and ionization of molecules, the atmosphere of the vapor-gas bubble consists of a gas mixture of complex phase composition with high metallurgical activity in reactions with the metal of the molten weld pool, as shown in Figure 6. Dissociation of water during underwater welding leads to an increase in the hydrogen partial pressure and to oxidation of iron and of the alloying elements Mn, Si, Cr, Ni, etc. The degree of oxidation depends on the pressure in the gas system. Particularly active in reactions with the liquid metal are the OH hydroxyl group, which is formed in the arc during dissociation of H2O, and water vapor, which oxidize the alloying elements by the reactions shown in Figure 7: Me + OH = MeO + 0.5H2 (for Mn, Fe, Co) (18); 2Me + 3OH = Me2O3 + 1.5H2 (for Fe, Cr, Al) (19); 3Fe + 4OH = Fe3O4 + 2H2 (20); Me + 2OH = MeO2 + H2 (for Ti, Si) (21); Me + H2O = MeO + H2 (for Mn, Fe) (22); 2Me + 3H2O = Me2O3 + 3H2 (for Fe, Cr, Al) (23); Me + 2H2O = MeO2 + 2H2 (for Ti, Si) (24). Thermodynamic modeling of phase equilibria shows that adding 20% CaF2 and 20% Na3AlF6 into the gas system at 0.1 MPa and at 0.6 MPa leads to a decrease in the partial pressure of H, H2, and the OH hydroxyl group due to the formation of HF, as shown in Figure 8. For example, at 3000 K and a pressure of 0.1 MPa, with 20% CaF2 added the partial pressures of H2, OH, and O decrease by 9.1, 8.7, and 3.2%, respectively, and with 20% Na3AlF6 added the partial pressures of H2, OH, and O decrease by 15.8, 9.8, and 3.16%, respectively. At a pressure of 0.6 MPa and 3000 K, adding 20% CaF2 reduces the partial pressures of H2, OH, and O by 7.8, 8.57, and 4.4%, respectively, and adding 20% Na3AlF6 reduces the partial pressures of H2, OH, and O2 by 15.68, 8.57, and 0.15%, respectively. Adding CaF2 and Na3AlF6 leads to the formation of HF with a partial pressure of up to 0.01 and 0.058 MPa, respectively, at system pressures of 0.1 MPa and 0.6 MPa. During heating, the complex fluoride Na3AlF6 in the arc dissociates with the formation of NaF and AlF3. Evaporation and dissociation of CaF2 and Na3AlF6 lead to the formation of NaF, AlF3, AlF2, AlF, and CaF molecules, which reduce the partial pressure of H2 in the vapor-gas bubble and react with H2O and H2 according to the reactions shown in Figure 9: 1.5H2O + AlF3 = 0.5Al2O3 + 3HF (26); H2O + CaF2 = CaO + 2HF (27); 0.5H2 + NaF = Na + HF (28); 1.5H2 + AlF3 = Al + 3HF (29); H + F = HF (30); 0.5H2 + F = HF (31). At high temperatures in the arc, metallurgical reactions occur in the gas phase between the fluorides NaF, AlF3, AlF2, and AlF and the oxide TiO2 with the formation of the fluorides TiF3, TiF4, and TiF2, which are highly reactive toward H2O and H2, for example in reactions (32)-(34), as shown in Figure 9: 4TiO2 + 2Na3AlF6 = 4TiF3 + 3Na2O + Al2O3 + O2 (32); 1.5H2O + TiF3 = 0.5Ti2O3 + 3HF (33); 1.5H2 + TiF3 = Ti + 3HF (34). In a TiO2-Fe2O3 slag system with 10% H2, an increase in the content of the basic oxide Fe2O3 to 30% and a decrease in the acidic oxide TiO2 to 70% result in an increase in the mass fraction of H2O, especially at the melting temperature of 1700-1750 K. When a mixture of fluorides (CaF2 + Na3AlF6) of up to 30% is added into the TiO2-CaF2-Na3AlF6 slag system, a sharp decrease occurs in the mass fraction of H2O in the slag, especially at the melting temperature of 1350-1550 K. The decrease in the H2O fraction with an increase in the fluoride content can be explained by the formation of HF and TiF3, which confirms the possibility of reactions (32)-(34) in the slag phase, as shown in Figure 10. Testing of flux-cored wires with the gas-slag systems TiO2-Fe2O3 and TiO2-CaF2-Na3AlF6 showed that the presence of Fe2O3 can lead to the formation of slag inclusions and penetration defects. Utilization of a TiO2-CaF2-Na3AlF6 gas-slag system provided a higher density of deposited metal and a decrease in porosity and slag inclusions, as shown in Figure 11.
The chemical composition and mechanical properties of the welds are shown in Tables 2 and 3. Due to oxidation, the content of alloying elements, especially manganese and carbon, significantly decreased in the direction from the root to the cap weld. The lowest transition coefficient for alloying elements was observed when welding with flux-cored wire PPS-AN1, as shown in Figure 12. Mechanical tests showed that welds made with the flux-cored wire PPS-APL2 have characteristics similar to those of welds made with the coated electrode UW/CS-1 as regards impact toughness, ductility, and hardness; however, the ultimate strength of the welds is 13-15% lower, as shown in Table 3 and in Figure 13. Welds made with the flux-cored wire PPS-AN1 had poorer mechanical characteristics because of the presence of elongated slag inclusions, as shown in Figure 14. To determine the content of diffusible hydrogen, measurements were performed by the vacuum method at a pressure of 1.5 Pa for 72 h for bead welding of 100 mm × 25 mm × 8 mm samples with the flux-cored wires PPS-AN1 for the TiO2-Fe2O3 system and PPS-APL2 for the TiO2-CaF2-Na3AlF6 system. Under identical welding conditions, the average content of [H] with the PPS-AN1 wire was 34.8 mL/100 g, and with the PPS-APL2 wire, 27.1 mL/100 g, i.e., a decrease of 21.1%, as shown in Figure 17. Conclusions (1) This work proposed a model of metallurgical and electrochemical processes in underwater wet welding in a vapor-gas bubble, molten slag, and liquid weld pool based on thermodynamic modeling for the optimization of the gas-slag system and the improvement of the quality of welds. Thermodynamic modeling and experiments showed that a complex mechanism based on reducing the partial pressure of H2O, H2, H, and OH in the atmosphere of the arc and in the vapor-gas bubble and on increasing the hydroxyl capacity of the basic slag system can be used to reduce the diffusible hydrogen content and slag inclusions in underwater wet welding of high-strength steel. This solution is achieved by increasing the metallurgical activity of the gas-slag system in the removal of water vapor, hydrogen, and hydroxyl in reactions with the formation of HF and the ionic dissolution of water vapor in the form of hydroxyl groups OH in the basic fluorine-containing slag of the TiO2-CaF2-Na3AlF6 system of the flux-cored wire.
(2) The oxidizing potential of the atmosphere of the arc and the vapor-gas bubble decreases with an increase in fluorides, which improves the transition coefficient of the alloying elements and the density of the deposited metal and reduces the volume of slag inclusions. As a result of using a flux-cored wire with a TiO2-CaF2-Na3AlF6 system, the average strength and impact toughness of the weld increased by 8 and 22%, respectively, and the diffusible hydrogen content decreased by 21% compared to a flux-cored wire with a TiO2-Fe2O3 system.
6,773.2
2020-11-10T00:00:00.000
[ "Materials Science" ]
Multiclass Pattern Recognition of Facial Images using Correlation Filters Pattern recognition comes naturally to humans, and there are many pattern recognition tasks which humans can perform admirably well. However, human pattern recognition cannot compete with machine speed when the number of classes to be recognized becomes tremendously large. In this paper, we analyze the effectiveness of correlation filters for pattern classification problems. We have used the Distance Classifier Correlation Filter (DCCF) for pattern classification of facial images. Two essential qualities of a correlation filter are distortion tolerance and discrimination ability. The DCCF transforms the feature space in such a way that images belonging to the same class move closer together and images from different classes move farther apart, thereby increasing the distortion tolerance and the discrimination ability. The results obtained demonstrate the effectiveness of the approach for face recognition applications. Keywords—Pattern recognition; correlation filter; multiclass recognition I. INTRODUCTION There are many daily pattern recognition tasks that come naturally to humans. For example, we can recognise a close friend of ours even after a gap of many years, though his features have changed a lot. We can understand a familiar voice even if it is slightly distorted. However, human pattern recognition suffers from three main drawbacks: poor speed, difficulty in scaling, and inability to handle some recognition tasks. Not surprisingly, humans cannot match machines on pattern recognition tasks for which good algorithms exist. Also, human pattern recognition has limitations when the number of classes to be recognized becomes large. Although humans have evolved to perform well on some recognition tasks such as face or voice recognition, except for a few trained experts, most humans cannot tell whose fingerprint they are looking at. Thus, there are many interesting pattern recognition tasks for which we need machines. The main goal of pattern recognition is to assign an observation, perhaps a signal, an image, or a high-dimensional object, to one of multiple classes. An important class of pattern recognition applications is the use of biometric signatures like face images, fingerprint images, iris images, etc., for person identification [1-3]. The use of two-dimensional (2-D) correlation to detect, locate, and classify targets in observed scenes has been a topic of research for a long time [4][5][6][7][8][9][10]. In this paper we analyze the possibility of using correlation filters for solving multiclass classification problems. The DCCF design uses a global transformation (correlation filter) to transform the feature space so as to decrease the intra-class distance and increase the inter-class distance. Results of experiments conducted on a benchmark dataset demonstrate the robustness of the proposed method. The rest of this paper is organized as follows. We begin with a discussion of some related work on correlation filters in Section 2. In Section 3, we discuss the salient features of the DCCF and outline the strategies adopted to apply the DCCF to a multiclass facial recognition problem. Section 4 provides the experimental results and finally Section 5 concludes the paper. II. RELATED WORK Correlation filters have been widely used for several pattern recognition tasks and for visual tracking of objects [11].
The advantage of using correlation filters for object tracking tasks is that they can track objects that are rotated or occluded, or that present various photometric and geometric challenges. Pattern recognition of complex objects which are partially occluded is also done efficiently using multiple correlation filters [12][13]. Composite correlation filters [14][15][16][17] give superior performance to matched filters as they are designed from multiple reference images. If the reference image set is representative enough to incorporate all the possible distortions that are likely to be encountered by an object, then the resulting filter will be distortion tolerant. While designing correlation filters, three questions need to be considered: (1) How well does the filter suppress clutter and noise? (2) How easy is it to detect a correlation peak? (3) How tolerant is the filter to distortion of the object? One of the earliest composite correlation filters proposed was the Synthetic Discriminant Function (SDF) [18]. The SDF uses a linear combination of reference images to create a composite image. When the designed filter is correlated with a test image, a peak will be present in the correlation plane if the test image corresponds to the TRUE class to be recognized. For all other inputs, belonging to the FALSE class, there will be no peak in the correlation plane. For digital pattern recognition applications, an SDF synthesized in the computer is correlated with the test image digitally, whereas for optical pattern recognition applications, the SDF designed digitally is converted to a hologram using multiple-exposure holographic techniques. Initially, the design of the SDF did not consider any noise and hence the filters were not noise tolerant. The Minimum Variance SDF (MVSDF) [19] was one of the earliest attempts to introduce noise analysis into SDF filter design by maximizing the noise tolerance of the SDF. The original SDF design considered only the cross-correlation values at the origin. This could not ensure that the output had its peak at the origin. Since shifts in the reference images resulted in shifts in the peak location, there was an ambiguity in the peak location when the reference image shifts were unknown. This problem was addressed in the Minimum Average Correlation Energy (MACE) filters [20]. MACE filters could produce sharp correlation peaks at the origin and were more likely to produce a correlation peak at the same location as the shifted input. However, it was realized that by exclusively focussing on the correlation-peak values, one neglects the information in the other regions of the correlation plane. In the minimum-squared-error synthetic discriminant function, the averaged squared error between the resulting correlation outputs and the desired one is minimized to obtain a desired correlation plane. The maximum-average-correlation-height filter essentially uses this idea to achieve a correlation shape that yields the smallest squared error. Distance Classifier Correlation Filters (DCCFs) [21] essentially incorporate these two ideas: rather than considering just the peak, the entire correlation plane is considered. Applications of the initial DCCF had limitations, as the approach was limited to just two classes at a time. III. MULTICLASS PATTERN RECOGNITION OF FACIAL IMAGES A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame obtained from a video source.
One of the ways to do this is by comparing selected facial features from the image with a facial database. DCCFs were first proposed by Mahalanobis et al. A DCCF uses a correlation filter h, designed from a set of training images, to classify a test image into one of a set of predefined classes. As illustrated in Fig. 1, DCCF design uses a global transformation (correlation filter) h which transforms the space in such a way that images belonging to the same class get closer and images belonging to different classes move apart. This improves the distortion tolerance as well as the discrimination ability of the correlation filter. A. Formulation of DCCF Filter Let x_ki represent the 2D Fourier transform of the i-th training image of class k, ordered as a vector, and X_ki the diagonal matrix with x_ki as its diagonal elements. Let R_k represent the mean vector of class k and D_k the diagonal matrix with R_k as its diagonal elements. If the transformation h has to make the inter-class distance large, then the distance between the mean correlation peak values of the different classes is made as large as possible. This is formulated as the measure h+Mh (the superscript + denotes the conjugate transpose), where M is given as in equation (1) [16]. In that equation R_k represents the mean vector of class k and R̄ is the mean over all the classes, given as in equation (2). Simultaneously, the transformation h makes each class compact. The compactness of a class after applying h is measured by the average similarity measure [9], a metric measure of the similarity between the training images and the mean value of the class, given by h+Sh, where S is the intra-class scatter matrix given as in equation (3). It follows that the correlation filter h should be designed so that it maximizes the ratio of inter-class separation to intra-class spread, h+Mh/h+Sh. Once the correlation filter h is designed using the training image set, a test image is classified by correlating it with h. The distance metric is calculated as the difference between two correlation outputs. The first corresponds to the correlation of the filter with the test image; as shown in Fig. 1, this corresponds to the transformation of the test image. The second corresponds to the correlation of the filter with the mean vector of class k, which corresponds to the transformation of class k. The test image is assigned to the class which gives the minimum distance. B. Classification using the DCCF Filter: Method 1 Let z be the test image in vectorized Fourier form. The correlation of the test image with the correlation filter is given by H*z, where H is the diagonal matrix with h along its diagonal and * denotes complex conjugation. The correlation of the mean vector R_k of class k with the correlation filter is given by H*R_k. The distance metric between the transformed test image and the transformed mean vector of class k is d_k = ||H*z − H*R_k||². The given test image is assigned to the class that gives the minimum value of d_k. C. Classification using the DCCF Filter: Method 2 The above expression for d_k can also be expanded as d_k = a + b_k − z+h_k − h_k+z (6), where a = ||H*z||² is the energy of the transformed input image, b_k = ||H*R_k||² is the energy of the transformed k-th class mean, and h_k (the element-wise product of |h|² with R_k) is considered the effective filter for class k. This gives us an alternate strategy for classification of a test image using 2D correlation techniques; a toy numerical sketch of both classification methods is given below.
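The following Python/NumPy sketch is a toy illustration of the DCCF scheme described above, not the authors' implementation: synthetic vectorized "images" are transformed to the frequency domain, the inter-class matrix M and the intra-class scatter S are formed as outer-product sums, h is taken as the dominant generalized eigenvector of the quotient h+Mh/h+Sh, and a test vector is classified both with the Method 1 distance d_k = ||H*z − H*R_k||² and with the Method 2 expansion of eq. (6). The class count, image size, noise level, and regularization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_train, dim = 3, 10, 64          # toy sizes (not from the paper)

# Synthetic "images": one template per class plus noise, vectorized.
templates = rng.normal(size=(n_classes, dim))
train = np.stack([t + 0.3 * rng.normal(size=(n_train, dim)) for t in templates])

# Work in the frequency domain, as DCCF does.
X = np.fft.fft(train, axis=-1)                # shape (class, sample, dim)
R = X.mean(axis=1)                            # class means R_k
R_bar = R.mean(axis=0)                        # overall mean

# Inter-class separation matrix M and intra-class scatter S (Hermitian).
M = sum(np.outer(rk - R_bar, np.conj(rk - R_bar)) for rk in R) / n_classes
S = sum(np.outer(x - R[k], np.conj(x - R[k]))
        for k in range(n_classes) for x in X[k]) / (n_classes * n_train)
S += 1e-3 * np.trace(S).real / dim * np.eye(dim)    # small regularization

# h maximizes h+Mh / h+Sh -> dominant eigenvector of S^-1 M.
evals, evecs = np.linalg.eig(np.linalg.solve(S, M))
h = evecs[:, np.argmax(evals.real)]

def classify_method1(z):
    """Method 1: d_k = ||H* z - H* R_k||^2 with H = diag(h); pick the minimum."""
    Zf = np.fft.fft(z)
    d = [np.sum(np.abs(np.conj(h) * (Zf - R[k]))**2) for k in range(n_classes)]
    return int(np.argmin(d))

def classify_method2(z):
    """Method 2, eq. (6): d_k = a + b_k - 2 Re(z+ h_k), h_k = |h|^2 * R_k."""
    Zf = np.fft.fft(z)
    a = np.sum(np.abs(np.conj(h) * Zf)**2)
    d = []
    for k in range(n_classes):
        b_k = np.sum(np.abs(np.conj(h) * R[k])**2)
        h_k = (np.abs(h)**2) * R[k]                 # effective filter for class k
        d.append(a + b_k - 2 * np.real(np.vdot(Zf, h_k)))
    return int(np.argmin(d))

z_test = templates[1] + 0.3 * rng.normal(size=dim)   # noisy sample of class 1
print(classify_method1(z_test), classify_method2(z_test))  # both methods agree
```

Because the two functions evaluate the same distance in different algebraic forms, they always return the same class label; Method 2 simply exposes the correlation-with-effective-filter structure used in the optical implementation discussed next.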
The third and the fourth terms in expression (6) are implemented as a 2D correlation of the test image z with the effective filter h_k for class k. The distance d_k is minimized when these terms are maximized, which corresponds to the peak correlation value of z with h_k. Shown in Fig. 2 is the schematic for implementing this method using 2D correlation. The test image is correlated with the effective filter of each class, and the peak value, which is the value at the origin of the correlation plane, is used to compute the distance using expression (6). As discussed in the previous section, coherent optical processing systems can be used to implement these techniques. D. Facial Recognition using DCCF In this section we discuss the classification of C different objects, in this case facial images, using the DCCF. Facial images of one person with different facial expressions form the members of one class. The training images are used to design the filter, which is then used to classify a test image. The algorithms for the classification of the facial images using DCCF are given in Table I; the filter-design algorithm in Table I concludes by finding the eigenvectors and eigenvalues of V. Algorithm for classification of the test image (Method 1): Step 1: formulate the square matrix H whose diagonal consists of the vector h. Step 2: calculate the distance metric of the test image vector z to the class-k mean vector R_k as d_k = ||H*z − H*R_k||² for k = 1 to C. Step 3: classify the vector z to the class that gives the minimum value of the distance metric. Algorithm for classification of the test image (Method 2): Step 1: calculate h_k and b_k for each k = 1 to C as given by expression (6). Step 2: perform the 2D correlation of the test image z with the effective filter h_k for each class. Step 3: compute the distance metric d_k for each class k using expression (6). Step 4: classify the vector z to the class that gives the minimum value of the distance metric. IV. EXPERIMENTAL RESULTS The experiments were conducted on the Essex face database [22], which consists of 3040 facial images of 152 persons. A total of 20 images are available from each class. The subjects were set at a fixed distance from the camera and were asked to speak, whilst a sequence of 20 images was taken. The speech was used to introduce facial expression variation. The resolution of each image was 180 x 200 pixels. The algorithm discussed in Section 3 was used to design a DCCF that could classify any image in the database into one of the 152 classes. The 20 images in each class were divided into two sets. One set of images was used to train the filter, referred to as training images. The other set of images was used to test the filter, referred to as test images. Representative images from the database are shown in Fig. 3. The robustness of the designed filter depends on the size of the training image set and on how well the training image set represents the possible distortions that occur. Obviously, the larger the training image set, the more robust the filter will be. But usually there are a limited number of images available. Hence, if most of the images are used to train the filter, we are left with few images to test the filter. If the number of images used to train the filter is small, the classification error increases. Hence the training and test image sets must be chosen judiciously. Table II gives the results obtained from the study by varying the sizes of the training and test image sets. In each case the total number of images used (i.e., training + testing) is kept constant. The filter is generated using the training images.
The other images are used for testing. The distance of the transformed test image from the transformed mean image of each class is calculated as described in the previous section. The test image is classified to the class which gives the minimum value of the distance metric. The total classification error for all classes, as a percentage of the total facial images, is plotted against the total number of training images as a percentage of the total facial images in Fig. 4. It is observed that when more than 40% of the images are used for training, the classification error is significantly low. Fig. 5 shows the distribution of errors when 95% of the images are used for training. There are only 5 errors out of 152 images used for testing (one image from each of the 152 classes), and these 5 errors fall in 5 different classes. In other words, the errors are distributed evenly. Fig. 6 shows the distribution of errors when 90% of the image dataset is used for training. The 16 errors are distributed over 10 classes with the maximum error per class being 2. Fig. 7 shows the distribution of errors when 85% of the images are used for training. The 25 errors are distributed over 13 classes with the maximum error per class being 3. Fig. 8 shows the distribution of 38 errors over 14 classes when 80% of the images are used for training, with the maximum error per class being 4 and the minimum error per class being 1. Figs. 9-15 show the distribution of errors when the training images decrease from 75% to 45%. The classification errors correspondingly increase from 50 to 175. For the case when the training images are 75% of the image dataset, the errors fall in 15 classes with the maximum error per class being 5 and the minimum error per class being 1. When the training images are 45%, the errors fall into 40 classes with the maximum error per class being 11 and the minimum error per class being 1. It is observed that for all these cases the error is uniformly distributed across all classes. Figs. 16-23 show the distribution of errors when the training images decrease from 40% to 5%. The total errors increase from 219 to 2868. These errors fall in 46 classes and 152 classes, respectively. In the worst case, when the training images are just 5%, the errors are evenly distributed in all the 152 classes with the average error per class being 19. It may be seen that as the number of training images decreases, the error increases, as expected, and these errors are evenly distributed across multiple classes. The experimental results presented show that the DCCF successfully classifies facial images of 152 persons from the Essex database. The facial images belong to persons from quite different ethnic backgrounds. There are 20 images of each person with different facial expressions. It is seen that the DCCF filter shows good discrimination ability in classifying the facial images of 152 persons while maintaining good distortion tolerance towards the variety of facial expressions present in each class. It is also seen that the classification error is below 10% when at least 40% of the images from the available dataset are used to train the filter. V. CONCLUSION In this paper correlation-based pattern recognition is adopted for the classification of facial images. A good correlation filter must have distortion tolerance and discrimination ability in equal measure.
A Distance Classifier Correlation Filter is designed by finding a global transformation H that maximises the separation between different classes while minimizing the spread of each class. Experimental results show that DCCFs work well for face recognition applications. However, the training set used to develop the filter should be large and representative. The percentage of training images must be at least 40% of the available dataset for errors to be reasonably low. We find that as the number of images used to train the filter decreases, the resulting increase in classification error is distributed evenly across multiple classes. Our future work would involve the study of the DCCF-based classification technique for various other biometrics like fingerprint, iris, etc.
4,170.6
2020-01-01T00:00:00.000
[ "Computer Science" ]
Age-velocity relations with GALEX FUV-determined ages of Sun-like, solar neighborhood stars A relationship between chromospheric activity and age is calibrated for FGK dwarf stars using GALEX FUV magnitudes and Gaia (G_BP − G) colors. Such a calibration between GALEX FUV magnitudes and stellar age has utility in population studies of dwarfs for further understanding of the chemical evolution of the Milky Way. As an illustration of one such application we have investigated a population of Sun-like, solar neighborhood stars for their metallicities and velocity dispersions: a cross-matched sample of FGK-type dwarf stars from Casagrande et al. (2011) with the Gaia and GALEX catalogs. Using calibrated relationships between FUV magnitudes and age, we determined a chromospheric activity indicator, Q, and a stellar age, τ, for each dwarf. We constructed age-velocity (AVR) and age-metallicity (AMR) relations with empirically determined FUV ages. Power-law fits to the AVR plots are consistent with heating-mechanism models within the literature. We further demonstrate that perigalactic distance and eccentricity versus FUV-age plots are consistent with an "inside out" formation history model. Introduction The velocity dispersion of solar neighborhood, FGK main sequence stars has been shown to increase with age. This so-called age-velocity dispersion relation (AVR) has a long history of examination (Strömberg 1946; Spitzer and Schwarzschild 1951; Wielen 1977; Seabroke and Gilmore 2007; Soubiran et al. 2008). Constraints on this relation lead to a better understanding of the mechanisms which define the formation and evolution of the Milky Way galaxy. The increase of velocity dispersion with stellar age may be the result of several factors, including how the Milky Way initially formed. Rix and Bovy (2013) review the study of Galactic evolution and the mechanisms that may have played a role in forming the current state of the AVR. One explanation posits that the orbits of stars were determined at birth. Vertical gradients of age and metallicity in this case are established as a consequence of the gas settlement of the disk, and radial gradients are formed "inside out" (see e.g. Veltz et al. 2008; Robin et al. 2014; Navarro et al. 2018). The initial determination of orbits and trends in star formation (Bird et al. 2013) may have been the result of several mechanisms, including early mergers (Brook et al. 2004, 2012) and accretion from satellite galaxies (Abadi et al. 2003). Alternately, the observed AVR is often argued to be the result of orbital scattering or dynamical heating of a stellar distribution after gas settled into a thin disk. As such, older stars subsequently had more time to gravitationally interact with other massive objects and become scattered into altered orbits. A number of numerical modelling studies have been made to explore the latter possibility that heating mechanisms may play a significant role in producing the observed AVR. Simulations have demonstrated how gravitational interactions can cause heating through a variety of mechanisms. In earlier studies Giant Molecular Clouds (GMCs) were presumed to be the main driver of Galactic heating (Spitzer and Schwarzschild 1951, 1953), yet in recent years simulations have shown that multiple mechanisms contribute to the observed AVR (see, for example, Hänninen and Flynn 2002; Aumer et al. 2016). The spiral arm structure and a possible bar (Barbanis and Woltjer 1967; Aumer et al.
2016), black holes (Lacey and Ostriker 1985; Hänninen and Flynn 2002), and satellite mergers (Walker et al. 1996; Moetazedian and Just 2016; Ting and Rix 2019) may all play a role in Galactic heating. The resulting models can be tested with observations of the solar neighborhood AVR. In this work we qualitatively investigate possible heating mechanisms that played a role in the initial galaxy formation and those involved in the evolution of the AVR. More often than not, the stellar ages utilized in observational determinations of the local AVR are based upon isochrone-determination techniques. In isochrone fitting one makes use of a comparison between an observed color-magnitude diagram containing the stars of interest and theoretical isochrones or evolutionary tracks for stellar models of varied ages. This method is useful when age-dating coevolved stars such as clusters and moving groups, but can result in uncertain ages when age-dating single stars. The Geneva-Copenhagen Survey (GCS) (Nordström et al. 2004) is a large survey well suited for measuring the AVR, and has become the standard for comparison (Holmberg et al. 2009; Casagrande et al. 2011). The GCS contains 16,682 G and F-type stars within the solar neighborhood with metallicity, rotation, age, kinematics, and Galactic orbit determinations. Kinematics for this sample used Hipparcos parallaxes, Tycho-2 proper motions, and uvbyβ photometry. Stellar ages within the GCS were determined with isochrone modeling. This well-used sample can test our understanding of the Milky Way's evolution by providing the data from which a local stellar AVR can be derived. For example, Fig. 8 of Holmberg et al. (2009), who use distances, ages, and kinematics from the GCS, shows synthetic age-velocity relations for three different disc heating scenarios (scattering, heating saturation, and a late minor merger). In recent years isochrone ages have been improved through the use of Bayesian techniques. However, in Figure 4a of Lin et al. (2018) stars older than ∼2 Gyr still show significant scatter between their improved Bayesian-determined ages and those found elsewhere within the literature. Such scatter illustrates a need for additional types of age-dating techniques which may be applied to single stars. Then one may compare AVRs constructed with stellar ages determined from a variety of methods. Ages of FGK main sequence stars based on the time-varying behavior of stellar activity can provide one such potential alternative (Strömberg 1946; Roman 1950a,b). In Crandall et al. (2020) the stellar activity-age relationship is the basis of a calibration between stellar age and far-ultraviolet (FUV) brightness which can be added to the toolbox of other age-dating techniques. Therein FUV magnitude observations from the Galaxy Evolution Explorer telescope (GALEX) are shown to be tracers of chromospheric activity (see e.g. Smith and Redenbaugh 2010) and hence age (Findeisen et al. 2011). Crandall et al. (2020) derived an FUV-age calibration through which ages may be determined without introducing errors associated with model-based methodologies, such as isochrone fitting. The FUV-age relationship in Crandall et al. (2020) is calibrated in a combined GALEX FUV and Johnson B and V color space. In Sect. 2 of this work we re-calibrate the relationship in a GALEX plus Gaia color space, as Gaia photometry is now available for a much larger number of stars than is (B − V) photometry. In Sect.
3 the updated FUV-age calibration is utilized to determine model-independent ages of 660 GCS stars. A stellar AVR is constructed using FUV-determined ages and unprecedentedly precise Gaia kinematics. The resultant observational AVR is fitted to a power law whose coefficients are compared with other determinations in the literature. Finally, we utilize perigalactic radii, eccentricities, and FUV-determined ages to show that the stars in our sample follow an "inside out" and "upside down" formation history pattern. Section 5 summarizes our findings. A far-ultraviolet excess correlation with stellar age Far-ultraviolet (FUV) emission has been shown to be an indicator of chromospheric activity and hence age (Smith and Redenbaugh 2010; Findeisen et al. 2011; Smith et al. 2017) among FGK main sequence stars. Within Crandall et al. (2020), this relationship was characterized such that one may use GALEX FUV magnitudes and Johnson (B − V) colors to estimate the age of FGK dwarf stars. The relationship takes the form log_e(τ) = log_e(a) + bQ, where τ is the stellar age in Gyr, Q is an FUV-excess parameter, and a and b are linear fit parameters. The Q parameter is dependent on the GALEX FUV magnitude and the Johnson (B − V) color. The fit parameters a and b are also dependent on (B − V). However, (B − V) colors are not always available for a stellar sample. With the recent Gaia data releases we find that Gaia colors are now available for many more Galactic FGK stars than is Johnson photometry. As such, within this section we establish a new FUV-age relationship for Sun-like stars through the use of Gaia colors. Data compilation Development of our new age calibration comes from an FUV-based analysis similar to that of Crandall et al. (2020) in which stellar age data from four catalogs were combined to produce a set of calibration stars. Each of these catalogs, Ballering et al. (2013), Isaacson and Fischer (2010), Sierchio et al. (2014), and Lorenzo-Oliveira et al. (2018), contains FGK dwarf stars with solar-like luminosities, metallicities, and spectral types. The ages in these four catalogs were primarily determined by stellar activity indicators such as the chromospheric Ca II H plus K emission line index log R_HK. Ballering et al. (2013) and Sierchio et al. (2014) utilized chromospheric and X-ray activity indicators supplemented with surface gravity measurements and gyrochronology, where available, to derive stellar ages. Their resulting age values were then checked against isochrone-determined estimates for consistency. Ages computed in Isaacson and Fischer (2010) were derived via log R_HK and calibrations from Mamajek and Hillenbrand (2008). Finally, ages for a few stars in our sample from Lorenzo-Oliveira et al. (2018) were solely estimated from Yonsei-Yale isochrones (Yi et al. 2001; Kim et al. 2002). The oldest star in our sample is 9 Gyr. Many of the stars in the catalogs have GALEX FUV and Gaia G_BP and G magnitudes, information vital to the FUV-age calibration. The GALEX far-ultraviolet magnitudes were extracted from the GR6/7 data release by use of the Mikulski Archive for Space Telescopes (Conti et al. 2011). Many of the said FUV magnitudes come from images obtained as part of the GALEX All-Sky Imaging Survey. Optical Gaia magnitudes were collected from Data Release 2 (DR2) and early Data Release 3 (eDR3) (Gaia Collaboration et al. 2018).
Additionally, our sample consisted of stars which have Johnson (B − V) colors, for the purpose of extracting stars with solar-like magnitudes and colors. The Johnson colors were collected from the Hipparcos catalog. We only considered those dwarfs which fall into a solar-like color range of 0.55 ≤ (B − V) ≤ 0.71 and have an absolute visual magnitude within ±0.5 mag of the Sun: 4.3 ≤ M_V ≤ 5.3. The absolute magnitude cut does not remove stars at ages where our FUV calibration works well (see Sect. 6). More luminous stars that have evolved further from the zero-age main sequence can have weakened chromospheric Ca II H and K emission lines (Wright 2004), which are used for age-dating within the catalogs, and so may have less reliable ages. Absolute magnitudes were determined with Gaia parallaxes. These restrictions ensure that the stars considered here have spectral types and luminosities similar to those of the Sun. After the above color and magnitude cuts were placed on the sample of stars, no additional cut was applied to the metallicities from Isaacson and Fischer (2010), so that we did not reduce the sample further based on metallicity. There may be a concern with metallicity effects contributing to errors in an FUV-age relationship; however, Figs. 6 and 8 of Crandall et al. (2020) show no correlations between metallicity and this relationship for [Fe/H] > −0.4 dex. Constraining the FUV-age relation In a similar manner to Crandall et al. (2020) we define an FUV-excess parameter Q as Q = (FUV − G_BP) − u_FUV, where G_BP is the Gaia blue magnitude and u_FUV is an upper boundary to the value of (FUV − G_BP) as a function of (G_BP − G). This boundary, which is shown in Fig. 2, represents a minimum chromospheric activity level against which to define an FUV excess, i.e., Q is equal to the difference between the observed (FUV − G_BP) color and the boundary value at the relevant stellar (G_BP − G). Figure 2 is a two-color diagram of (FUV − G_BP) versus (G_BP − G) for all FGK stars in our calibration sample with optical colors of 0.10 ≤ (G_BP − G) ≤ 0.55. The few dwarfs outside of this range are very scattered in their two-color relationship and are not included in Fig. 2. A minimum chromospheric boundary, the u_FUV function referred to above, is indicated in Fig. 2. Errors shown in Fig. 2 only reflect Gaia magnitude errors. GALEX FUV magnitude errors are reported in the GR6/7 data release; however, they are not significant for our uses here. For example, the cross-matched sample described in Sect. 2.1 contains 1,288 stars with FUV magnitude errors. The average magnitude error for this sample is 0.8%, where the percent error was determined by (error in mag)/mag. See Sect. 2.3 for more discussion of the errors assumed in our calibration. To constrain the FUV-age relationship with Gaia colors, we plot literature-reported ages from the four samples listed in Sect. 2.1 against the Q values for each star. Following Crandall et al. (2020), the fitting function that we use is a linear equation involving the natural logarithm of the stellar age τ, which is taken to be in units of Gyr throughout this paper. The basic age-calibration equation that we empirically adopt is thus Equation (1) given above. As shown in Fig. 2 this relationship is dependent on the stellar (G_BP − G) color, and so we plot log_e τ against Q in color bins. The eight defined Gaia color bins span the range 0.10 ≤ (G_BP − G) ≤ 0.55 and are listed in Table 1. Duplicate stars from the four samples (Lorenzo-Oliveira et al. 2018; Sierchio et al. 2014; Ballering et al.
That is, no ages were combined in the form of an average or the like. Ages were not combined because the methodologies for deriving ages within the four samples differed significantly enough, with the exception of Ballering et al. (2013) and Sierchio et al. (2014), that we could not reasonably average them. Table 1 lists the values of the fitted age-calibration parameters for each color bin, namely the color range of each bin, the average color per bin, the number of stars in each bin (N), the maximum FUV excess Q_max to which the fit is made, the linear fit parameters a and b, Spearman's correlation coefficient (ρ), the coefficient of determination (r^2), and the RMS of each fit about log_e τ. We found reasonably high ρ and r^2 values for each fit, indicating that the FUV-age relationship is well fit by the linear function of Equation (1).

Extinction can have a significant effect in the UV and optical bands. However, we did not attempt to test for possible correlations between interstellar reddening and the FUV-excess parameter Q within this work. In Crandall et al. (2020), this effect was examined for the calibration between FUV magnitude and Q. Figure 10 of Crandall et al. (2020) shows the residuals of Q about the age-Q calibration fits versus Gaia extinction values. They do not observe any significant correlation.

We have plotted stellar age, τ, against metallicity [Fe/H] for the calibration sample in Fig. 4. We do not see a major difference in age ranges for the three metallicity bins, at least down to a metallicity of about one-tenth solar. The range of age for the metal-rich bin is 0.22 ≤ τ ≤ 8.17 Gyr. As such, we do not anticipate metallicity to impact the validity of the FUV-age relation for the metal-rich stars. We also note that Fig. 4 shows that the most metal-rich star within the range −1.5 ≤ [Fe/H] < −0.2 has an age comparable to the Sun. However, low-metallicity stars with ages younger than 3 Gyr are quite rare in the calibration sample, which mostly encompasses what is generally considered, within the literature, to be near the upper half of the metallicity range of the Galactic thick disk. There are some relatively young ages among some of the most metal-poor stars in Fig. 4. The calibration sample does not contain many stars with [Fe/H] metallicities as low as −0.4 dex or less (see Fig. 1), i.e., as low as the Casagrande et al. (2011) survey encompasses. Thus one cannot rule out that at metallicities of < −0.5 dex the age-Q relation might become sensitive to [Fe/H], which could hinder interpretation of the apparently young metal-poor stars in Fig. 4.

In the final step of the FUV-age calibration process we investigate how the fit parameters in Table 1 vary with Gaia (G_BP − G) color. Figure 5 shows the log_e(a) and b parameters versus (G_BP − G), where the representative colors are the median (G_BP − G) within each bin from Table 1. Errors shown in Fig. 5 are the root-mean sum of the squares of the residuals about the determination of each log_e(a) and b value. The trends of log_e(a) and b with (G_BP − G) were fitted as functions of color; these fits constitute Equations (4) and (5). With Equations (1)-(5) one can empirically estimate the age of an FGK-type, solar-like star given a GALEX FUV magnitude and Gaia colors. Alternatively, one may interpolate within the grid of a and b values given in Table 1.

Related errors

Several factors could contribute to errors in the fits given in Fig. 3 and Table 1, including errors in the age determinations of the calibration stars, metallicity effects, and errors in the FUV magnitude observations.
The four sources from which ages were chosen (Lorenzo-Oliveira et al. 2018; Sierchio et al. 2014; Ballering et al. 2013; Isaacson and Fischer 2010) do not have associated errors quoted. As such, we did not include errors for log_e(τ) in our analysis. Crandall et al. (2020) explored the possibility of metallicity effects in the FUV-age relation and found no clear correlations. However, they noted that this is potentially due to the restricted metallicity range among their calibration stars, which have near-solar abundances. The current calibration also imposes a similar restriction. The extent to which errors in GALEX GR6/7 FUV magnitudes propagate into an age uncertainty was quantified in Sect. 3.3 of Crandall et al. (2020). They concluded that for a 6.0 Gyr solar-type G dwarf there would be an error of ∼1.0 Gyr in an FUV-derived age, assuming an observational error in FUV magnitude of 0.05 mag. We performed a similar analysis. For a theoretical star of given age between 0 and 6 Gyr with a solar color of (G_BP − G) = 0.33 (Casagrande and VandenBerg 2018), we calculated the associated error in the derived FUV age using Equations (1)-(5) for assumed errors of 0.02 and 0.05 mag in the GALEX FUV magnitude. Figure 6 shows the results of these calculations. Here the green dashed line corresponds to a 1:1 exact match in age for no error in FUV magnitude, while the blue and orange lines give the age that would be derived for assumed GALEX FUV magnitude errors of ±0.02 and ±0.05 mag, respectively. From Fig. 6 we conclude that for a 6 Gyr star, errors in FUV magnitude greater than 0.05 mag could translate into an error in derived age of more than 1 Gyr. FUV magnitude errors of 0.05-0.15 mag can therefore cause large age errors beyond this age threshold. As such, as with Crandall et al. (2020), our calibration is best used for stellar ages less than 6 Gyr.

The age-velocity relation

Far-ultraviolet-based ages for FGK dwarf stars can be combined with measurements of their space motions and applied to a study of correlations between age and kinematics of stars in local regions of the Galaxy. As noted in Sect. 1, the age-velocity relation (AVR) is a general trend which shows that velocity dispersion increases as a function of stellar age within the solar neighborhood. More often than not, an AVR is constructed using isochrone-determined ages. In this section we utilize the Casagrande et al. (2011) sample of thousands of Geneva-Copenhagen Survey dwarf stars to construct an AVR with empirical FUV-determined ages.

The stellar sample

The basis for the stellar sample which we use to construct an AVR is the Casagrande et al. (2011) collection of solar neighborhood dwarf stars. Casagrande et al. (2011) reanalyzed the Geneva-Copenhagen Survey (Nordström et al. 2004) with new effective temperatures and metallicities, which were then used to estimate the ages of FGK-type stars with the BASTI (Pietrinferni et al. 2004a,b, 2009) and Padova (Bertelli et al. 2008, 2009) isochrone models. The GCS sample comprised 12,329 dwarf stars. We then discarded stars that do not have GALEX FUV or Gaia G and G_BP magnitudes.
Additionally, stars without Gaia parallax measurements were omitted, as this information is used to make an absolute magnitude cut on the sample. The FUV-age calibration in Sect. 2 is only functional for stars with absolute magnitudes 4.3 ≤ M_V ≤ 5.3. As such, we did not include stars outside of this solar-analog range. Absolute visual magnitudes for each star were determined by

M_V = V + 5 + 5 log_10(p),

where V is the Johnson magnitude and p is the parallax in arcseconds. The FUV-age calibration is also restricted by the Johnson color range 0.55 ≤ (B − V) ≤ 0.71 and the Gaia color range 0.24 ≤ (G − G_BP) ≤ 0.39, and stars outside of these ranges were not included in the sample. The final magnitude- and color-cut sample contained 660 dwarf stars.

An AVR with literature-reported ages

We first constructed a baseline AVR, Fig. 7, with the Casagrande et al. (2011) sample, without performing any of the color, magnitude, or metallicity cuts mentioned above. The ages in this figure are those determined by Casagrande et al. (2011) with Padova isochrones, in which a probability distribution was constructed within a Bayesian framework and the median value was taken as the final derived age (see Appendix A of Casagrande et al. 2011). Each of the three velocity dispersions was constructed from the 3D velocity components UVW, which are also quoted in Casagrande et al. (2011) and were measured in the Geneva-Copenhagen Survey (Nordström et al. 2004). To determine the velocity dispersion along a given axis we first binned the 12,329 stars by age, in bins of width 0.5 Gyr from 0 to 10 Gyr. We then utilized the median absolute deviation (MAD) to determine the dispersions σ_U, σ_V, and σ_W. For example, the U dispersion is derived from

MAD(U) = median(|U_i − Ũ|),

where U_i is a given star's U velocity and Ũ is the median velocity for a given bin. Within each bin we use a median representative age, τ̃. The first three panels in Fig. 7 show the MAD velocity dispersions for the three kinematic components versus median isochrone age. We also constructed a final AVR represented by a dispersion which we denote s. This dispersion is the quadrature sum of the U, V, and W velocity dispersions:

s = (σ_U^2 + σ_V^2 + σ_W^2)^(1/2).

The s values are shown versus median isochrone age, τ̃, in the bottom right panel of Fig. 7. Again, this AVR represents the constructed Casagrande et al. (2011) relation using their data without color, magnitude, or metallicity cuts made to the sample. The shape of this AVR is comparable to that of Fig. 17 in Casagrande et al. (2011). Errors are not shown in Fig. 7, as UVW velocity errors are not given in the Casagrande et al. (2011) sample. Traditionally, an AVR is fit with a power-law function, and we have done so in each panel of Fig. 7, represented by the red curve. The fits take the form σ = A τ^β for each component (the individual σ_U, σ_V, and σ_W fits are listed in Table 2), with

s = 18.66 τ^0.32

for the combined dispersion, where τ is the Padova isochrone-determined age in Gyr and the velocity dispersions are in units of km s^−1. The root-mean-square (RMS) residuals about the velocity dispersion fits are 1.52, 1.15, 1.00, and 1.37 km s^−1 for σ_U, σ_V, σ_W, and s, respectively. These fits are noted in Table 2.

An AVR with FUV-determined ages

The stellar age-dating tool developed in Sect. 2, in which we calibrated a relationship between GALEX FUV magnitudes and age, is useful because it is a purely empirical relationship. Hence, this tool uniquely complements other age-dating techniques based on isochrones, since it is largely based upon the time dependence of stellar activity. We have taken the compilation of Casagrande et al. (2011) dwarf stars with the absolute magnitude and color cuts described in Sect. 3.1 and estimated their ages with GALEX FUV observations.
Equations (1)-(5) were used to determine the FUV age, τ, for each of the 660 stars in the resulting sample. The stars were then binned by age in order to determine a MAD representative value of the velocity dispersion components. However, in this case the sample was reduced from the original 12,329 Casagrande et al. (2011) stars to 660 after the color and magnitude cuts. As such, there were significantly fewer stars older than ∼4 Gyr in the FUV sample. We accounted for this by using varying bin widths. The velocity dispersions σ_U, σ_V, σ_W, and s were calculated in the same manner as for the full Casagrande et al. (2011) sample. Figure 8 shows all of the age-velocity relations obtained when utilizing FUV-determined ages. We note more scatter compared to Fig. 7 for stars older than ∼4 Gyr. Additionally, the velocity dispersion relation in the V component is flatter than for the full Casagrande et al. (2011) sample. We also note that the FUV-age relationship is best for stars younger than ∼6 Gyr, as the estimated associated FUV error is 1 Gyr for a 6 Gyr old star. Each dispersion relation was fit with a power-law function of the form σ = A τ^β (the individual component fits are listed in Table 2), with

s = 21.28 τ^0.24

for the combined dispersion, where τ is the GALEX FUV-determined age in Gyr and the velocity dispersions are again in km s^−1. The RMS values about the velocity dispersion fits are 3.16, 3.12, 2.41, and 2.10 km s^−1 for σ_U, σ_V, σ_W, and s, respectively. Power-law fit parameters for these AVRs are listed in Table 2.

To compare the FUV-age AVR and the literature-age AVR, we constructed an additional AVR which uses Casagrande et al. (2011) velocities and ages, as in Sect. 3.2, but places color and magnitude constraints on the sample. We used the color ranges previously noted in Section 3.2. These constraints reduced the sample to 1,066 stars. Figure 9 shows the AVR constructed with Casagrande et al. (2011) ages and the quadrature sum of the 3D velocity dispersions, s. As in the previous AVR plots, stars were binned and a MAD representative velocity dispersion was calculated for each bin. This AVR was fit to a power law of the same form, which is also given in Table 2. The power-law parameter, β = 0.23, for this AVR is comparable to that of the FUV-age AVR, β = 0.24.

[Fig. 9 caption: Age versus velocity dispersion for the Casagrande et al. (2011) sample with magnitude and color cuts (1,066 stars) and literature-reported ages. τ̃ is the median age within each bin. The combined velocity dispersion, s, is the quadrature sum of the UVW velocity dispersions. The relation is fit by a power-law function (red).]

One possibility for the similar fit is that stars within the solar-like range tend to have a flatter AVR, with β ∼ 0.23-0.24. Another possibility is that the reduced number of stars has resulted in a flatter AVR.

AVRs with Gaia-determined velocities

An additional step in constructing age-velocity relations was to determine the 3D velocities of the Casagrande et al. (2011) sample by using Gaia kinematic information. We compiled a sample of Casagrande et al. (2011) stars with RA, DEC, proper motions, radial velocities, and parallax observations from Gaia Data Release 2, yielding data for a total of 11,350 stars. The UVW velocity components were determined by inputting the Gaia kinematics into the PyAstronomy package (Czesla et al. 2019).
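To make this step concrete, the following is a minimal sketch, not the exact pipeline used here, of converting Gaia astrometry to UVW velocities with PyAstronomy. It assumes the pyasl.gal_uvw routine (a port of the IDL astrolib procedure of the same name) with the call signature shown; the input file name, its column names, and the per-star loop are illustrative placeholders, and the sign conventions of the returned components should be checked against the GCS definitions before any comparison.

```python
# Sketch: UVW space velocities from Gaia DR2 astrometry via PyAstronomy.
# Assumes pyasl.gal_uvw with (ra, dec) in deg, proper motions in mas/yr,
# radial velocity in km/s, and parallax in mas; the file and column names
# below are hypothetical.
import numpy as np
from astropy.table import Table
from PyAstronomy import pyasl

gaia = Table.read("gcs_gaia_dr2_crossmatch.fits")  # placeholder cross-match table

uvw = np.array([
    pyasl.gal_uvw(ra=row["ra"], dec=row["dec"],
                  pmra=row["pmra"], pmdec=row["pmdec"],
                  vrad=row["radial_velocity"], plx=row["parallax"])
    for row in gaia
])
u, v, w = uvw.T  # km/s; compare component by component with the GCS values (Fig. 10)
```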
Figure 10 shows the UVW velocity components derived from the Gaia data plotted against the velocities quoted in the Casagrande et al. (2011) data set. The red line represents equal velocities in the two samples. The velocity components in all three directions are quite similar, and there is little systematic difference between the Casagrande et al. (2011) and Gaia velocities. The root-mean-square deviations for the U, V, and W components are 5.28, 4.75, and 4.11 km s^−1, respectively.

The resulting age-velocity relations are shown in Fig. 11. Again, the velocity dispersions were determined from the Gaia velocities, while the stellar ages come from the Casagrande et al. (2011) isochrone fitting. As before, the AVRs were fit with power-law functions, whose parameters are listed in Table 2.

Finally, we have constructed AVR plots using Gaia-determined velocities and FUV-determined ages. All four velocity dispersions are shown versus stellar age in Fig. 12, along with the power-law fit to each relation. Similar to the AVR in Fig. 8, this sample only includes stars with GALEX FUV magnitudes, colors within 0.55 ≤ (B − V) ≤ 0.71 and 0.24 ≤ (G − G_BP) ≤ 0.39, and absolute magnitudes within the range 4.3 ≤ M_V ≤ 5.3: a total of 598 stars. Each relation was fit with a power law of the form σ = A τ^β, with

s = 21.48 τ^0.21

for the combined dispersion, where τ is the FUV-determined age in Gyr. The RMS values about the velocity dispersion fits are 2.82, 3.02, 2.75, and 2.47 km s^−1 for σ_U, σ_V, σ_W, and s, respectively. The relations shown in Fig. 12 are not as clearly defined as those in Fig. 11. This is likely a consequence of the lower number of stars in this sample as compared to that in Fig. 11.

[Fig. 11 caption: Age-velocity relation plots for the Casagrande et al. (2011) sample using isochrone-determined ages and Gaia-derived velocity dispersions. Velocity dispersions for the UVW components were determined using the median absolute deviation. τ̃ is the median age within each bin. The combined velocity dispersion, s, is the quadrature sum of the UVW components. The AVR is fit by a power-law function (red).]

We note that there are no significant differences in the power-law fits when comparing the Casagrande et al. (2011) and Gaia velocities. For example, the value of β found when fitting the s-AVR with isochrone-determined ages and GCS velocities is 0.32 ± 0.02. This is comparable to β = 0.34 ± 0.02, which describes the fit of the s-AVR constructed with isochrone-determined ages and Gaia velocities. Likewise, the FUV-age versus s AVR with GCS velocities is fit with β = 0.24 ± 0.03, which is comparable to the β = 0.21 ± 0.03 that describes the s-AVR with FUV-determined ages and Gaia velocities.

Holmberg et al. (2009) simulated AVRs using synthetic Geneva-Copenhagen Survey observations. They show in their Fig. 8 (panel a) that if only giant molecular clouds (GMCs) or other local heating agents contributed to heating, the AVR would continuously rise in velocity dispersion. Our AVR in Fig. 8, as well as that constructed with Casagrande et al. (2011) velocities and ages (Fig. 7), shows a flattening of the curve around 2-3 Gyr. Holmberg et al. (2009) further demonstrated, in panels c and d of their Fig. 8, that a minor merger occurring at the 3 Gyr mark would cause such a flattening. It is quite possible that our observational AVRs are indeed showing the results of the Milky Way experiencing an early minor merger. The power-law fit parameter, β, gives insight into the formation history of the Milky Way.
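For reference, the sketch below illustrates the AVR construction and power-law fitting described above: bin the stars by age, estimate a robust dispersion per bin from the median absolute deviation, sum the components in quadrature, and fit s = A τ^β. The input arrays, bin edges, minimum bin occupancy, and the Gaussian scaling of the MAD are all assumptions made for the example and are not the paper's exact choices.

```python
# Sketch: binned AVR with MAD-based dispersions and a power-law fit.
import numpy as np
from scipy.optimize import curve_fit

def mad_dispersion(x):
    """Robust dispersion estimate from the MAD (Gaussian scaling assumed)."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def binned_avr(age, U, V, W, edges):
    """Median age and combined dispersion s per age bin."""
    tau, s = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (age >= lo) & (age < hi)
        if m.sum() < 5:  # skip sparsely populated bins (assumed threshold)
            continue
        tau.append(np.median(age[m]))
        s.append(np.sqrt(mad_dispersion(U[m])**2 +
                         mad_dispersion(V[m])**2 +
                         mad_dispersion(W[m])**2))
    return np.array(tau), np.array(s)

def power_law(tau, A, beta):
    return A * tau**beta

# Usage with placeholder arrays:
# tau_bin, s_bin = binned_avr(age, U, V, W, edges=np.arange(0.0, 10.5, 0.5))
# (A, beta), cov = curve_fit(power_law, tau_bin, s_bin, p0=(20.0, 0.3))
```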
Ting and Rix (2019) argue that many solar-neighborhood studies agree that the velocity dispersion towards the North Galactic Pole, σ_W, when fit with a power law, yields a parameter β ∼ 0.5. However, simple simulations of an AVR created solely by heating due to GMCs have resulted in a fit parameter of β ∼ 0.25 (Hänninen and Flynn 2002; Kokubo and Ida 1992). Additionally, Spitzer and Schwarzschild (1951), who first highlighted such a relation, found that their mean velocity dispersion was fit by a function with β = 1/3. Indeed, there are differences in the shapes of simulated AVRs. One heating mechanism alone may not be enough to describe velocity dispersion observations; there may very well be several mechanisms at play. In our work, we created AVRs with Casagrande et al. (2011) isochrone-determined ages and found β ∼ 1/3 (see Table 2). Our observational AVRs constructed with a sample of stars with solar-like colors and magnitudes were fit with β ∼ 0.23.

[Fig. 12 caption: Age-velocity relation plots for the Casagrande et al. (2011) sample with FUV-determined ages and Gaia-derived velocity dispersions. Velocity dispersions for the UVW components were determined using the median absolute deviation. τ̃ is the median age within each bin. The combined velocity dispersion, s, is the quadrature sum of the UVW velocity dispersions. The relation is fit by a power-law function (red).]

There are many reasons why observational AVRs may not be consistent with those in simulations: the variance in methods for age-dating stars and the constraints of a local data sample, among other factors, certainly impact our interpretation of the age-velocity relation.

Stellar metallicity, chromospheric activity, age, and orbit parameters

Similar to the AVR, the age-metallicity relation (AMR) is often used to interpret the formation history of the Milky Way. Twarog (1980a,b) found an AMR for nearby main-sequence stars within the Milky Way which has since been used to test chemical evolution hypotheses. This work on the solar neighborhood AMR has been greatly extended by the Geneva-Copenhagen Survey (Casagrande et al. 2011; Holmberg et al. 2007, 2009). The common consensus on the Milky Way's disk structure is that it consists of younger stars which reside closer to the Galactic plane and tend to be more metal-rich, while older stars are more vertically dispersed and metal-poor. Thus, as noted in the previous section, the limitations of a local sample of stars for studying the AVR of the Galactic disk also apply to the age-metallicity relation, which can also vary with position in the Galaxy.

Quite often the population of stars in the Milky Way disk is split into two groups: thin and thick disk constituents. The formation histories of these two populations have received much attention in the literature, with Bird et al. (2013) providing a detailed discussion. In one general scenario the thick disk is considered to have formed in situ, when metal-poor stars maintained their orbital scale heights after formation and the surrounding gas collapsed into the Galactic plane (Veltz et al. 2008; Robin et al. 2014; Navarro et al. 2018). Thin disk stars may then have formed later in the collapsed gas disk. Alternatively, perhaps a major merger (Veltz et al. 2008), or several mergers (Brook et al. 2004), early in the Milky Way's formation history formed the thick disk stars via accretion.
As suggested in the previous section, our Galaxy's current structure may have been the result of heating mechanisms which have driven the thick disk outwards ("inside out" formation). As these thick disk stars are older, they have had more opportunities to interact gravitationally with giant molecular clouds (Hänninen and Flynn 2002; Aumer et al. 2016) and black holes (Lacey and Ostriker 1985). Many works support the claim that, within the varying scenarios of mechanisms which have driven the Milky Way's evolution, the kinematic trends were likely designated at birth (Bird et al. 2013). Regardless of the formation history, thin and thick disk stars have often been classified by their [Fe/H] abundance, age, or a combination of the two parameters. However, there is significant recent interest in a different classification of stellar populations based on [α/Fe] abundance ratios. High-α and low-α stars then distinguish the high- and low-α disks, respectively, within the solar neighborhood (Fuhrmann 1998; Prochaska et al. 2000), and this concept has been extended to Galactic structure studies (Bovy et al. 2012; Haywood et al. 2013; Bovy et al. 2016). It is important to note that Bovy et al. (2012) found a lack of clear correlation between the low-α and high-α disks and the thin and thick disks. In addition, the distinction between the low-α and high-α disks appears to depend on Galactocentric radius, with the high-α population residing closer to the Galactic center and the low-α population in an annulus further out (Bovy et al. 2016; Haywood et al. 2016; Mackereth et al. 2019).

Age-metallicity relation

We explored a possible AMR for our sample of stars by first investigating the relationship between metallicity and the chromospheric activity indicator Q. The values of the metallicity [Fe/H] and [α/Fe] used here are from Casagrande et al. (2011), who derived a new metallicity scale for the Geneva-Copenhagen Survey sample of solar neighborhood stars. These metallicity estimates are not accompanied by errors in the Casagrande et al. (2011) catalog. We only considered stars from the sample which have GALEX FUV observations. In addition, stars were only considered if they fell within the Gaia color range 0.27 < (G_BP − G) < 0.40, as this range reveals the variance of chromospheric activity within the FUV broadband range without significant photospheric contamination. This sub-sample consisted of 4,644 stars. Activity indicator (Q) estimates were derived for each of the stars using Equations (2) and (3). Across this sub-sample there is a wide distribution of both age and activity levels. Furthermore, among stars with [Fe/H] < −0.5 the spread in Q is less than among the more metal-rich stars. In Fig. 13 we see that chromospherically inactive stars (those with more positive Q) have a wide metallicity range. As such, we are left with the question of whether there is a correlation between the activity parameter Q and metallicity. This is explored in Fig. 14. We performed a two-sample KS test between the two bottom panels and found a p-value of 0.38. Since the p-value is high, we cannot reject the null hypothesis that the distributions of the two samples are the same; there is no significant difference in the distributions of Q between these two metallicity populations. We further explored the relationship between Q and metallicity by binning the stars in both [Fe/H] and [α/Fe] by 0.1 dex and finding the average Q within each bin.
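As an illustration of these two checks, the sketch below runs a two-sample KS test on Q between a metal-poor and a metal-rich sub-sample and computes the mean Q in 0.1 dex metallicity bins (the analogue of Table 3). The data frame, its column names, and the −0.5 dex split are placeholders chosen for the example, not the exact selections used for Fig. 14.

```python
# Sketch: KS test on Q between two metallicity populations, plus binned mean Q.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

# stars = pd.DataFrame({"Q": ..., "feh": ...})   # placeholder sample

metal_poor = stars.loc[stars["feh"] < -0.5, "Q"]
metal_rich = stars.loc[stars["feh"] >= -0.5, "Q"]
stat, p_value = ks_2samp(metal_poor, metal_rich)
# A high p-value (e.g. ~0.4) means one cannot reject the hypothesis that the
# two Q distributions are drawn from the same parent population.

# Mean Q in 0.1 dex [Fe/H] bins:
edges = np.arange(stars["feh"].min(), stars["feh"].max() + 0.1, 0.1)
mean_q_per_bin = stars.groupby(pd.cut(stars["feh"], edges))["Q"].mean()
```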
The results of this binning are shown in Table 3, and they reveal that there is little, if any, relationship between the mean value of Q and either the [Fe/H] metallicity or [α/Fe] for this sample of stars. An age for each star in the GALEX-GCS sample was determined using Equations (1), (4), and (5). These ages are plotted against both the metallicity [Fe/H] and [α/Fe] in the left and right panels of Fig. 15, respectively. Similar to Fig. 16 of Casagrande et al. (2011), there appears to be little to no correlation between metallicity and age in this figure. It must be noted, however, that the oldest ages plotted in Fig. 15 represent an extrapolation of our age calibration.

Stellar age and orbit parameters

In addition to kinematic information, Casagrande et al. (2011) also provide orbit information for the GCS stars, including the perigalactic radius (r_min), the apogalactic radius (r_max), and the orbit eccentricity derived from them. Figure 16 shows each of these orbit parameters as a function of the GALEX FUV-determined ages. Interestingly, Fig. 16 suggests that some stars of age > 3 Gyr may not have formed in the solar neighborhood. There is a tendency for stars of such age with high eccentricity to have small perigalactic radii. To further investigate this pattern, the age-metallicity plots of Fig. 15 have been recreated for four different populations of stars: two populations selected with r_min < 6.0 kpc and r_min ≥ 6.0 kpc (Fig. 17), and two populations sorted by eccentricity at e < 0.2 and e ≥ 0.2 (Fig. 18). These figures demonstrate that the population of stars with high orbit eccentricity and small perigalactic radii is not young. Very few stars with r_min < 6.0 kpc formed in the last 2 billion years. In addition, the mean [Fe/H] for stars with e ≥ 0.2 is smaller than that for stars with smaller eccentricity. Older stars with low metallicity, large orbit eccentricity, and small perigalactic radii are consistent with a Galactic model that includes radial mixing by dynamical heating. Here, the inner parts of the Milky Way formed first, and the first-formed stars then migrated outward due to dynamical heating. This formation history is commonly called the "inside out" model (Matteucci and Francois 1989).

[Fig. 17 caption (partial): ...against FUV-determined ages for two populations of stars sorted on the basis of perigalactic distance: r_min < 6.0 kpc (left panels) and r_min ≥ 6.0 kpc (right panels). Age resolution is greater for stars younger than 6 Gyr.]

In addition to the above attributes of these old stars, they also have a large range in the U velocity component towards the Galactic center, as seen in Fig. 20, which shows that stars with r_min < 7.0 kpc have a much more dispersed distribution in U than stars with r_min > 7.0 kpc. This, again, is consistent with a large velocity dispersion caused by dynamical heating. Radial mixing plays its role here by redistributing stars radially over time (Loebman et al. 2011). That is, older stars are more radially dispersed and exist at higher vertical scale heights. Figure 19 shows the maximum vertical displacement above the Galactic plane versus perigalactic radius for the stars for which we found FUV ages. In the inside-out formation model, stars with small perigalactic radii would also have larger maximum vertical displacements. Indeed, many stars in this sample reflect this correlation and have a z_max of several kpc above the plane. However, we do note that the correlation is not fully consistent within Fig. 19. A high-resolution hydrodynamic simulation of a Milky Way-like galaxy by Bird et al. (2013) shows a similar scenario to what we observe here.
The simulation within their work uses a fully cosmological environment and tracks age cohorts of stars over the Galaxy's formation history. They do note that this method is less comparable to observations which utilize chemical tracers. However, it is a useful comparison for this work because of its method of tracking stars based on their ages. Bird et al. (2013) find that stars formed at redshift > 3 would have been scattered into kinematically hot configurations with thick scale heights and short radial scale lengths. Younger stars are found at larger radii but exist closer to the Galactic plane. Indeed, in this work we appear to observe the Milky Way structure following such an "inside out" pattern.

Stars with high and low [α/Fe]

In the radial migration model, high-α stars which formed in the inner disk will have a steeper slope in an age-velocity dispersion relation (Schönrich and Binney 2009; Mackereth et al. 2019). It follows that a low-α population should have a flatter AVR. The Casagrande et al. (2011) sample of Geneva-Copenhagen Survey stars includes measurements of [α/Fe]. This has allowed us to split the GCS set of stars with FUV-derived ages into low-α and high-α populations using a simple cut at [α/Fe] = 0.1, with low-α stars falling below this threshold and high-α stars above it. Admittedly, this is something of an arbitrary and simple distinction, and a more appropriate designation might include a Galactocentric radius consideration (see, e.g., Mackereth et al. 2019). Figure 21 shows the AVR for both the low-α and high-α samples with FUV-determined ages (τ in Gyr). The AVRs were each fit with a power law of the form s = A τ^β; for the low-α population the fit gives

s = 19.60 τ^0.19,

while the high-α population yields a steeper relation. Here s is again the quadrature sum of the three velocity dispersions, σ_U, σ_V, and σ_W.

[Fig. 21 caption: AVRs constructed using FUV-determined ages for both the low- and high-α-element samples, designated as having [α/Fe] abundance ratios less than +0.1 dex and greater than +0.1 dex, respectively. τ̃ is the median age within each bin.]

Indeed, a flatter AVR curve is found for the lower-α population. The flatter curve indicates a younger, dynamically cooler population. This exemplifies α as a useful proxy for age when measuring the AVR. Perhaps another instructive pair of AVRs could be constructed by parsing populations by orbit eccentricity. We did attempt to fit separate AVRs to a high-eccentricity population consisting of stars with e ≥ 0.3 as well as a low-eccentricity population with e < 0.3. However, the high-eccentricity sample is significantly smaller, which contributes to a quite scattered and flat AVR. The scattered nature of the plot does not allow for a well-fit power-law curve, and so it is not presented here.

Conclusion

In this work we have calibrated a relationship between GALEX FUV magnitude and stellar age for FGK-type stars in the main-sequence phase of evolution. The calibration is similar to that given in Crandall et al. (2020); however, in this case one utilizes readily available Gaia (G − G_BP) colors instead of Johnson (B − V) colors. As such, the current calibration has the advantage of being applicable to large numbers of stars in the Gaia data releases. The empirical relationship described herein allows a user to estimate the age of a Sun-like star from a GALEX FUV magnitude and the Gaia (G − G_BP) color.
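As a compact illustration of that recipe, the sketch below strings together Equations (1), (2), (4), and (5): form the FUV excess Q from the GALEX and Gaia photometry, obtain the color-dependent parameters a and b (here by interpolating within a Table 1-like grid), and invert Equation (1) for the age. The boundary function, the grid arrays, and the usage numbers are placeholders, since the published coefficients are not reproduced in this excerpt.

```python
# Sketch of the age-estimation recipe (placeholder inputs, not published values).
import numpy as np

def estimate_age(fuv, g_bp, g, u_fuv, color_grid, log_a_grid, b_grid):
    """Age in Gyr from GALEX FUV and Gaia photometry (Equations 1, 2, 4, 5).

    u_fuv  : callable giving the minimum-activity boundary (Equation 3) as a
             function of (G_BP - G); the published fit is not reproduced here,
             so a user-supplied function is assumed.
    *_grid : arrays playing the role of the Table 1 parameter grid.
    """
    color = g_bp - g
    q = (fuv - g_bp) - u_fuv(color)                   # Equation (2)
    log_a = np.interp(color, color_grid, log_a_grid)  # Table 1 / Equation (4)
    b = np.interp(color, color_grid, b_grid)          # Table 1 / Equation (5)
    return np.exp(log_a + b * q)                      # invert Equation (1)

# Usage with illustrative placeholder inputs:
# tau = estimate_age(fuv=19.6, g_bp=8.93, g=8.60,
#                    u_fuv=lambda c: ..., color_grid=..., log_a_grid=..., b_grid=...)
```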
The calibration has the greatest age resolution for stars younger than 6-7 Gyr. By utilizing the new FUV-age calibration, we have constructed relations between age and velocity dispersion for a set of 660 Geneva-Copenhagen Survey stars having GALEX FUV magnitude measurements. The AVRs have been fitted with a power-law function in which velocity dispersion varies with stellar age as τ^β. The values found for the power-law parameter β are consistent with theoretical AVRs constructed from simulations of the orbits of Galactic disk stars that evolve by giant molecular cloud (GMC) heating, with β ∼ 0.25. In addition, plots of perigalactic radius and orbit eccentricity versus FUV age show that our sample of stars is broadly consistent with an "inside out" model of Galactic disk formation and evolution, in which older stars are more centrally located with larger orbit eccentricities, while younger stars are found at larger radii and have smaller eccentricities.
Diversity and Divisions in the Swedish Disability Movement: Disability, Gender, and Social Justice

ABSTRACT There is lively discussion in the social sciences about minority groups and their claims for social justice. Universalism versus difference and redistribution versus recognition are two important issues of debate. This paper takes a closer look at the social justice claims articulated by the Swedish disability movement. It discusses how questions of representation, collective identity, and needs interpretations are dealt with in a number of disability associations. One important assumption guiding our study is that the interpretations of members' needs, how their needs can best be met, and who is to have the legitimate right to communicate their needs, are questions subject to constant debate. The aim is to demonstrate some of the complexities confronting the disability movement in its struggle for social justice. To be more specific, we set out to show two things: (i) how different kinds of justice claims are balanced by the investigated organizations; and (ii) that the demands for cultural recognition and socioeconomic redistribution are raised not only by the disability movement vis-à-vis the state, but also by groups within the disability movement vis-à-vis other groups in the movement.

Correspondence: Agneta Hugemark, Department of Sociology, Uppsala University, Sweden. Email: <EMAIL_ADDRESS>

The Swedish disability movement is struggling to change perspectives on disability policy from being (solely) a matter of providing care, to one of democracy, citizenship and human rights. It has strongly emphasized that people with impairments have the right to participate fully in society and make decisions concerning their lives. In addition, it has been pointed out that people with impairments suffer not only from socioeconomic disadvantage, but also from having the experience of not being recognized in their own right. Like the ''new social movements'' that took shape in the 1960s and 1970s, the disability movement has emphasized that causes of injustices are embedded in unquestioned norms, habits and symbols, and has called attention to the fact that a universalistic welfare state policy may serve to suppress differences and establish as a norm the experiences and interpretations of dominant social groups. Accordingly, a discourse on a cultural politics of recognition has played an important role in the movement's work. On this view, equality is not only a question of material resources and social positions, but also a question of possibilities for social groups to articulate and communicate their own experiences and perspectives (Barnes, Mercer & Shakespeare 1999, Stuart 1996, Thomas 2002). Hence, equality involves not only rejection of irrelevant differences, but also full recognition of legitimate and relevant ones. Setting out from this assumption, universalistic welfare policy has been criticized for disregarding and suppressing the ''difference'' of minorities and subordinated groups (e.g. people with impairments, ethnic minorities, women), and for generalizing the experiences and culture of dominant groups (e.g. non-disabled people, ethnic majorities, men). While the expressions and identities of the latter groups are constructed as normality, the particular perspectives and experiences of the subordinated groups are devalued or rendered invisible (Fraser & Honneth 2003, Taylor 1992, Young 1990).
Nancy Fraser's explicit aim is to broaden the theory of social justice to encompass both the redistribution and recognition aspects of justice. The normative core of her concept is the notion of parity of participation, i.e. that it is not compatible with justice that some individuals are denied participation in social interaction on equal terms (Fraser 1997, 2000, 2001, 2003). On the one hand, the distribution of material resources must secure the independence of participants. This precludes forms and levels of material inequality and economic dependence that impede parity of participation (e.g. deprivation, exploitation, large differences in income and wealth). On the other hand, all participants must have the same possibilities to attain social esteem. This precludes institutionalized value patterns that deny some people the status of full partners in interaction, e.g. cultural domination, non-recognition and disrespect. Social justice requires instead that value patterns express equal respect for all participants and ensure equal opportunity for achieving social esteem (Fraser 2001:29). The different types of claims form two paradigms of social justice, one emphasizing the need for economic restructuring of the division of labour, property or income, and the other the need for cultural and symbolic change, such as the revaluation and preservation of difference. In addition, they approach group differences in different ways. While the redistribution paradigm assumes that it is classes or class-like collectivities that suffer from injustices and that group differences must be abolished, the recognition paradigm insists instead that the particularities of social groups must be recognized and revalued (Fraser 2003:11-16). The different aspects of social justice may be treated as two distinct perspectives on social justice that can be applied to any social movement. Whether the dimension of redistribution or the dimension of recognition is predominant in a particular case is an empirical question (Fraser 2003:12). This perspectival dualism thus implies that redistribution and recognition do not correspond to two different substantial domains: economy or culture. They rather constitute two analytical perspectives that can be used to study each domain. Far from being mutually exclusive, claims for recognition and for redistribution tend to reinforce each other in a dialectical manner (Dahl 2004). Importantly, the normative concept of participatory parity is useful at both the intergroup and intragroup level. That is, it supplies the standard for assessing both ''the effects of institutionalized patterns of cultural value on the relative standing of minority groups vis-à-vis majorities'' and ''the internal effects of minority practices'' (Fraser 2003:40). In line with Fraser's argument, and research within disability studies (e.g. Ferguson 2003, Thomas 2002), we assume that people with impairments may suffer from both injustices rooted in culture and injustices rooted in the political economy. Thus, like the feminist movement, the disability movement must claim the redistribution of economic resources, as well as the recognition of group experiences and the revaluing of disrespected identities. Such claims stand in an uneasy relation to each other, however.
Whereas claims for recognition tend to promote group differentiation, the politics of redistribution aims at abolishing the kinds of economic institutions and arrangements that support group differences (Fraser 1997, 2003). This dilemma is designated by Fraser as the redistribution-recognition dilemma. It has been much discussed within feminism in terms of equality and difference (Echols 1983, Nicholson 1990, Minow 1991, Rhode 1990). We assume that the disability movement, like the feminist movement, to some extent faces a contradictory task when framing and articulating claims for social justice. It must address conflicts arising from needs for economic redistribution, on the one hand, and needs to be recognized as a social group with distinct experiences, on the other hand (Morris 1996). In the following sections, we will try to illuminate, step by step, how some actors in the Swedish disability movement handle the above-mentioned kinds of conflicts.

Methods

The present article is part of an ongoing research project that studies struggles for social justice in the disability movement. It is funded by the Swedish Council for Working Life and Social Research (FAS). In this article the term ''disability movement'' is used to mean disability organizations, and we thus approach the latter as carriers of the disability movement (Ahrne 1994:67-68). The empirical data analysed here are drawn from four individual-based organizations and two meta-organizations (i.e. organizations whose members are other organizations). The sample of individual-based organizations includes The Swedish Association of the Visually Impaired (SRF, Synskadades Riksförbund), The Swedish National Association of the Deaf (SDR, Sveriges Dövas Riksförbund), The Swedish National Association for Persons with Intellectual Disability (FUB, Föreningen för Utvecklingsstörda Barn, Ungdomar och Vuxna), and The Swedish Federation of People with Mobility Impairments (DHR, De Handikappades Riksförbund). Several considerations have guided this selection. The organizations chosen are relatively large and old, they have strong voices in public debates on disability, and they put different amounts of weight on claims for redistribution and recognition, respectively. The two meta-organizations are The Swedish Disability Federation (HSO, Handikappförbundens samarbetsorganisation) and Forum – Women and Disability in Sweden (Forum – Kvinnor och handikapp). The former was chosen because it organizes the vast majority of individual-based disability organizations in Sweden. The choice of the latter organization was guided by our interest in intersections between disability and gender. It was originally established as the result of a perceived absence of a gender perspective within the disability movement. One empirical data source used in this article is written documents, such as political programmes, regulations, reports from annual meetings, and reports on gender equality projects and networks. Another source is interviews with key people. We have conducted semi-structured interviews with 11 people who represent the six above-mentioned organizations, including presidents and people representing different divisions or networks within the organizations. The thematic interviews were structured around questions about the respective organization's activities, important political issues, economic resources, gender questions, rules for membership and representation, decision-making processes and cooperation with other disability organizations.
The interviews lasted around one hour, were accurately and fully transcribed, and sent to the informants for approval. The concepts of needs interpretation, representation and collective identity have been central to our analysis. When translating quotations from Swedish to English, we have tried to convey the informants' specific messages, rather than translating verbatim. Quotations have also been edited slightly to enhance readability.

Balancing Redistribution and Recognition Claims

How do the different disability organizations in our study balance different social justice claims? Which societal changes are perceived as necessary in order to make parity of participation possible for their respective members? As will become clear, there are substantial differences between the organizations in these respects. We mentioned above that the Swedish Disability Federation (HSO) is a meta-organization for the vast majority of individual-based disability organizations in Sweden. It puts much emphasis on equality and parity of participation. ''The goal is a society for everyone that is characterized by solidarity, equality and participation'' (www.hso.se). Respect for difference is seen to be at the very core of the organization's political work. In keeping with this, it is stressed that ''people with impairments, and their needs'' should be ''considered as important as people without impairment and their needs'' (www.hso.se). We are working together . . . based on an assumption of the equal value of all human beings and the equal rights of all human beings. The equal value of all means respect for human life, notwithstanding our differences and different abilities. The equal rights of all means the right to fully participate together with others in societal life (www.hso.se). However, parity of participation requires more than merely respecting differences. According to The Swedish Disability Federation (HSO), redistribution of economic resources and organizational change are also of vital importance. It is asserted that the ''basis for a successful disability politics'' is the existence of powerful laws in combination with universal welfare policies (www.hso.se). Redistribution and recognition claims are also intertwined in the political programmes put forward by the individual-based disability organizations. Two of them, The Swedish Federation of People with Mobility Impairments (DHR) and the Swedish National Association for Persons with Intellectual Disability (FUB), seem to place more or less equal weight on both aspects of social justice. The ways in which the two types of social justice claims are balanced by The Swedish Association of the Visually Impaired (SRF) and The Swedish National Association of the Deaf (SDR) are, however, undoubtedly at variance (Hugemark & Roman 2005). One of them, The Swedish Association of the Visually Impaired (SRF), places a great deal of emphasis on equality and redistribution of economic resources. The sameness of people with and without visual impairments is consistently stressed. ''We are part of the public, even though we sometimes need special solutions in order to use a service or to participate on equal terms.'' A politics of redistribution is considered absolutely necessary to make parity of participation possible for people with visual impairments. It is also underlined that the improvements already made are largely dependent on a universal type of welfare system.
A universal, tax-funded welfare politics that is free from indiscreet and insulting means testing is the best way to meet our needs. Respect for the integrity of visually impaired people is best maintained by means of universal measures (www.srfriks.org). The problems and remedies that are identified by The Swedish National Association of the Deaf (SDR) are markedly different. Whereas the Swedish Association of the Visually Impaired (SRF) strongly emphasizes the need for a politics of redistribution, this organization above all describes injustices in terms of cultural devaluation. It establishes that ''the greatest barrier'' for deaf people is the ignorance and lack of understanding of their ''particular culture, problems and needs''. It is also emphasized that sign language is the mother tongue of the deaf. We don't look upon ourselves as disabled, but as a linguistic minority group. There are no barriers in the encounter between two signing persons. Barriers arise in the communication with the hearing society, which uses a language different from ours, a language that we cannot hear . . . (www.sdrf.se). The Swedish National Association of the Deaf (SDR) understands the deaf culture as ''the very nerve in the lives of deaf people'', and sign language is seen as a necessary precondition for the ''deaf culture as a way of life''. In keeping with this, the struggle to strengthen this language, and to give it official status as a minority language, is an issue of great importance in SDR. The above examples demonstrate significant differences between the kinds of social justice claims articulated by the organizations in this study. Whereas The Swedish Association of the Visually Impaired (SRF) places considerable weight on the need for economic change, The Swedish National Association of the Deaf (SDR) emphasizes instead the importance of cultural change, i.e. revaluation and recognition. As we have pointed out earlier, however, the different kinds of social justice claims are far from mutually exclusive. For example, the prominence given to redistribution by the former organization is accompanied by an insistence on the importance of recognizing the group-specific experiences of people with visual impairments. Likewise, a redistribution of economic resources seems to be vital in order to provide, for example, the number of sign-language interpreters necessary for deaf people to participate equally in social life.

Old Versus New Organizations

Having discussed the ways in which disability organizations may balance social justice claims differently, we will now consider how group identities are constructed by the organizations in our study. Who are the ''we'' claiming social justice? What do ''we'' have in common? How are ''we'' differentiated from other social groups? The following discussion implies that there is no straightforward answer to these kinds of questions. The examination of the relation between The Swedish Disability Federation (HSO) and the individual-based organizations instead makes clear that different actors draw the dividing lines between groups of people differently. Besides a complex pattern of group difference, there is also a struggle going on between organizations over resources and needs interpretations. The Swedish Disability Federation (HSO) presents itself as ''the collective voice of the Swedish disability movement'' (www.hso.se).
To be more precise, this ''collective voice'' refers to people with ''visible or invisible impairments'' (HSO 2002). The dichotomy between ''us'' and ''them'' is thus constructed by HSO as the difference between having and not having impairment. The representatives of the individual-based organizations that we have interviewed make additional distinctions, however. Categorizing their own respective organization as a classic and traditional kind of disability organization, they all in fact seem to draw a sharp line between ''old disability organizations'' and ''new disability organizations''. In this way, our informants stressed that the organization they represent shares a common interest only with some of the member organizations of The Swedish Disability Federation (HSO). One informant designated the old disability organizations as ''access organizations'' and the new ones as ''patient organizations''. Although not all informants used these exact terms, we will adopt them in the following paragraphs. In what respect, then, are so-called ''patient organizations'' seen as different from ''access organizations''? Here, differences in members' needs are mentioned. To make her point, the president of The Swedish Association of the Visually Impaired (SRF) uses as an example the needs of deaf-blind members of her own organization, and the needs of members of one organization that includes people with dental injuries. She does not feel the needs of the two groups are comparable. In her understanding, considerably more resources are required to meet the needs of the deaf-blind. Another common theme is that the scope of the political agenda is different in ''access organizations'' and ''patient organizations''. It is, thus, illustrative that the administrative director of The Swedish National Association for Persons with Intellectual Disability (FUB) explains that the political objective of her own organization is much broader than that of ''patient organizations'', such as the Breast Cancer Association. While the latter is considered to have a narrow political agenda, FUB is pointed out as an organization that works with ''children and adults [from] zero to 100 years of age, all degrees of intellectual impairments, in every social issue''. In this way, the distinction between being impaired and not being impaired was problematized by our informants. It was certainly seen as important, but not considered the only division separating ''us'' from ''them''. Conflicting interpretations about how needs are best met also occur in our material. Different organizations give voice to different interests in discussions about what kinds of political questions should be pursued by The Swedish Disability Federation (HSO). This tends to foster conflicts within the organization. Because of the implications for decision-making, some of our informants considered it problematic that the membership composition of HSO has come to be dominated by ''patient organizations''. This was thought to create frictions ''in spite of the fact that everybody speaks so nicely about consensus and so on''. The president of The National Association of the Deaf (SDR) explained his view on the question. The new organizations, they grow like mushrooms out of the ground, and they are patient and diagnosis organizations only. And HSO works on and discusses their issues a lot. They discuss health issues and medical issues. But we fight for different things, you know. We talk about accessibility in society / . . . /.
We talk [about] culture, social life, and the labour market. We have a much wider agenda. Today, we are not satisfied with the composition of HSO or the co-operation within it. We do not mean to imply that the problems described above capture the entire relation between The Swedish Disability Federation (HSO) and its member organizations. Naturally, some of the general political claims made by HSO correspond to claims being made by the studied individual-based organizations. Otherwise they would leave the organization. The point to be made is, instead, that both the heterogeneity among organizations within HSO, and the fact that some member organizations are more similar than others, is bound to create organizational tensions (Ahrne & Brunsson 2005). One issue now under debate concerns the distribution of resources across member organizations. The issue discussed is the federation's policy regarding principles for state subsidies. The informants in our study describe, in various ways, this ongoing struggle. The excerpt below, from an interview with the president of The Swedish Association of the Visually Impaired (SRF), may serve as a good illustration. As we have mentioned above, SRF has reflected on joining The Swedish Disability Federation (HSO). However, the president is critical of the present policy on state subsidies. She finds that the adopted policy, i.e. that all organizations should receive the same amount of money regardless of the surplus costs associated with some impairments, privileges ''patient organizations'' at the expense of ''access organizations''. In her view, the needs of the members of ''access organizations'' require more organizational resources, since there are not the same kinds of costs associated with arranging meetings and activities in ''patient organizations''. I mean, to take one example, when I get here I have my guide dog. That's a kind of surplus cost we have. / . . . / When I start my computer I have to have Braille, I have to have speech. / . . . / We have to have escorts, we have to get some help, and all this. I mean you can't get around that. And I mean there are different degrees of dependence due to type of impairment. / . . . / And we have said this, but twenty-two of the organizations of HSO have said no / . . . /. And they formed a majority and went to the parliament (Riksdagen) and essentially said that they wanted to do away with the surplus costs altogether. And share that money between everyone. And we felt that extremely unfair. The similarities among ''access organizations'' have resulted in informal cooperation around political issues of common interest. This kind of collaboration, thus, is based on a narrower collective ''we'' than the one articulated by The Swedish Disability Federation (HSO). The following quotation from the opening speech held by the president of The Swedish National Association of the Deaf (SDR) at the organization's conference in 2005 is telling in itself (www.sdrf.se/sdr/verksamhet/kongress). SDR is a member of HSO. / . . . / But unfortunately, it is difficult to find the right place within an organization with such disparate member organizations. On the one hand, it adds strength to be many [organizations] that adhere to the claims of people with impairments. One example of where we gain from HSO is that it supports us on the issue of sign language.
But in our view, there are also many so-called patient organizations and single-issue organizations that take too much room. The dilemma implicitly touched upon in the above quotation may be phrased in the following way. On the one hand, organizations usually want to increase the number of members. The rationale for this is simple. Firstly, a big organization may be more successful in realizing its social-political programme. This constitutes a main motive for individual-based disability organizations to join The Swedish Disability Federation (HSO). Secondly, more members typically generate more economic resources. On the other hand, more members imply increased heterogeneity. This may in turn render the framing and articulation of political claims more difficult. The reason is, of course, that increased diversity of group identities, needs and interests may also foster internal conflicts. Consequently, heterogeneity renders coordination more difficult and hence threatens to weaken the organization's capacity to take action. The described tensions between organizations thus reflect difficulties related to size and unity that typically face meta-organizations (Ahrne & Brunsson 2005). This size and unity dilemma is not confined to meta-organizations, however. As will be discussed below, it is also faced, to different degrees, by the individual-based organizations. The Ability/Disability Divide We have suggested that the collective ''we'' constructed by The Swedish Disability Federation (HSO) is problematized by ''access organizations'' in our study. Nevertheless, the ability/disability divide also plays an important role within these organizations. Rules for membership and representation are cases in point. Representation issues concern questions such as which actors are to be granted the legitimate right to interpret members' needs, who is to have the right to represent the organization in public and who is to be entrusted with the task of handling, administrating and controlling the collective resources of the organization. The course of events at the annual meeting of The Swedish Federation of People with Mobility Impairments (DHR) in 2001 clearly suggests that such questions are potentially fraught with conflict within the organizations. At this meeting, the entire national board unexpectedly resigned. The immediate cause was that a majority of the delegates at the conference supported a proposal suggesting alteration of the rules for representation in order to enable people without impairments to hold positions on boards and committees. The resigning committee immediately initiated the establishment of a new organization, The Association for Mobility Impaired People 8 (Rörelsehindrades förbund). In this new organization it is established that both eligibility and the right to vote are exclusively reserved for people with mobility impairments. Rules for representation differ across the individual-based organizations in our study, as do rules for membership. We have discerned two types of membership rules. One type prescribes that members must both experience impairments and support organizational objectives; the other type requires support of organizational objectives only. The Swedish Association of the Visually Impaired (SRF) practises the former kind of rule when exclusively allowing people with a visual impairment to enter the organization. 
The other organizations practise the latter type of rule and are hence open for anyone who supports the objectives of the organization. We refer to these differences as ''narrow'' versus ''wide'' membership groups. Being admitted to an organization is not necessarily the same as being eligible for decision-making bodies. Two types of rules are discernible in this case also. One is based on the characteristics of the individual member, i.e. that all people holding positions must experience impairments. The other type is based on the composition of boards and committees. This rule requires that a fixed share of the people holding positions must experience impairments. In two organizations in our study, The Swedish Association of the Visually Impaired (SRF) and The Swedish National Association of the Deaf (SDR), the experience of impairment is a prerequisite for eligibility. In the case of SRF, where all members have visual impairments, everyone is eligible for boards and committees. In SDR, however, there is a wide gap between rules for membership and rules for representation. Whereas anyone who accepts the political objectives of the organization may become a member, only deaf people are allowed to hold positions on the national board. Consequently, some members are not granted the legitimate right to represent the organization. The Swedish Federation of People with Mobility Impairments (DHR) and The Swedish National Association for Persons with Intellectual Disability (FUB), both of which have wide membership groups, practise the type of rule requiring that a fixed share of the people holding positions must experience impairment. In the case of DHR, it is prescribed that a majority of members on the national board, including the president, must have mobility impairments. Regulations in FUB require that at least one person with intellectual impairments be represented on the national board. Although rules for membership and representation are different, the studied organizations nevertheless all have some rules suggesting that the presence of people with impairments on boards and committees is considered important. Feminist political scientist Anne Phillips (2003) has coined the expression ''the politics of presence'' to refer to the kind of thinking implicit in such rules, that is, the belief that the representation of people with certain characteristics is essential. Such a politics has been strongly advocated by both ethnic minority groups and the women's movement. The argument put forward is that representation of ethnic minorities or women on decision-making boards is important, because the experiences and identities of these social collectivities may be different from those of the ethnic majority group or men, respectively. ''The separation between 'who' and 'what' is to be represented, and the subordination of the first to the second'' are thus seen as problematic (Phillips ibid:5). In keeping with this, the rules for representation in the studied individual-based organizations imply that ''the politics of presence'' is an important principle in these organizations. The excerpt below from an interview with the president of the Swedish Federation of People with Mobility Impairments (DHR) may provide a good illustration. A very strong ideology among us is that . . . We are the ones who have the experience of mobility impairments . . . 
Centring on the issue of who controls and runs the organization, the distinction between organizations for and of people with impairments is clearly related to ''the politics of presence''. According to Michael Oliver (1990:113) organizations of disabled people are ''those organizations where at least 50 percent of the management committee or controlling body must, themselves, be disabled''. He also argues that the emergence of the new disability movement in Great Britain in the late 1960s marks a trend away from support for traditional organizations for people with impairments to organizations of people with impairments (see also Barnes, Mercer & Shakespeare 1999). This development seems to apply to Sweden as well. The course of events at the Swedish Federation of People with Mobility Impairments' (DHR) annual meeting, referred to above, suggests that questions of representation (who is to have the right to interpret and communicate the needs of the group) are not fixed once and for all. It may not come as a surprise that our data also suggest that they potentially foster conflict. In some organizations with wide membership groups, divisions within the organizations have at times led to more or less open conflicts. The ability/disability divide might then stimulate the construction of two different groups of members within one organization. The course of events in The Swedish National Association for Persons with Intellectual Disability (FUB) is telling in this respect. This organization was founded by the parents of children with intellectual impairments, and parents have ever since been very influential within the organization. However, in 1995, members with intellectual impairments decided to establish the section Klippan to increase their own influence within FUB. The idea was born out of dissatisfaction with things as they were, according to one of the initiators of the establishment of the new section. He recounts to us that the process began in the 1980s when ''it was decided that persons with intellectual impairments should take positions in committees, and stuff like that''. Experiences gathered at meetings were not always positive, because ''we often became hostages because the committee meetings were held on a far too high level for my buddies to keep up with''. The result was that people with intellectual impairments felt that they did not have the opportunity to communicate sufficiently their own experiences and perspectives within the organization. That's when we began to think about how to increase the influence of persons with intellectual impairments. We appointed a group consisting of, I think, three or four persons. / . . . /This was in 1995 . . . so we proposed to establish something we called Klippan. One of the main motives behind establishing the new section was to empower people with intellectual impairments and to enable members of this group to interpret and articulate their own needs and interests. The section aims at allowing people with intellectual impairments ''to be adults and to emancipate themselves from their parents'' in order to deal with problems ''on their own terms'' (www.fub.se/Klippan). Members with intellectual impairments apparently draw a line between themselves and the non-disabled members of the organization. With the establishment of Klippan, this division was institutionalized in the formal organizational structure. Moreover, the process made manifest differences in opinion about social-political issues. 
To take just one example, we were told that the two different groups display different views on the question of foetal diagnosis. Our informant said that the board of the national organization, which consists mainly of people without impairments, had not taken a clear enough stand on this issue. In his opinion, FUB ought to fight against foetal diagnosis ''because me and many of my friends think about it as a way to do away with people you don't want''. He believes that group differences in opinion are due to divergent experiences. [The national board] is composed mainly of parents . . . And they of course get their viewpoints from their own experiences, whereas my friends and I form our opinions . . . from our experiences, so to speak. So, you proceed from very different experiences. The history of the establishment of Klippan brings to the fore a problem, i.e. that procedures for decision-making may function as barriers to equal participation (Fraser 2003). In this case, a group of members successfully argued and acted in order to be empowered and recognized in their own right. They chose to protest to express dissatisfaction and to bring about organizational change. The group of people that resigned from the national board in order to establish a new organization, which was referred to in the beginning of this section, preferred another strategy. Instead of choosing the voice option to protest against the state of things, they chose the exit option (Hirschman 1970). Intersecting Social Divisions and Parity of Participation The examples given in the previous sections help to illustrate some of the complexities confronting disability organizations in their struggle for social justice. That is, that the heterogeneity of the movement and diversification of group identities sometimes give rise to hidden or open conflicts concerning representation issues. They also make clear that claims for recognition and redistribution are raised not only in relation to the welfare state, but that there is also an ongoing struggle over resources and needs interpretations between disability organizations and between various social groups within the movement. Thus far our analysis has mainly been confined to examining ways in which type and degree of impairment may affect group identities, needs interpretations and political claim making. We agree, however, with feminist disability researchers pointing out the need to deal with the intersection of disability with gender (e.g. Barron 1997, Morris 1996, Thomas 2002, Traustdóttir 2004, Verner & Swain 2002). Starting out from the assumption that ''disabled women occupy different kinds of social locations to disabled men, because more than one system of oppression is in operation'' (Thomas 2002:48), the following section consequently examines intersections between disability and gender. We may begin by establishing that the existence of informal power structures is discernible simply by looking at the organizational charts of the individual-based organizations in our study. Within the respective organizations, there are sections, networks, projects, etc. dealing specifically with particular experiences and problems facing women, young people and people born outside Sweden. That is, these groupings are founded on major social divisions in society, such as gender, age and ethnicity. 
The very existence of these kinds of arenas suggests the complexities and power relations involved in the interpretation and articulation of members' needs. Sociologist R. W. Connell (1987) has developed the concepts of ''gender order'' and ''gender regime'' in order to distinguish between the structural inventory of an entire society and that of particular institutions (ibid:98-99). The division of labour, the structure of power and the structure of cathexis (sexual social relations) are seen as major elements at both levels. We find Connell's attempt to develop a concept for the intermediate level of social organization fruitful, and will here use the term ''gender regime'' to denote the given state of play in gender relations at various institutions. Two types of gender regimes will be discussed: on the one hand, gender regimes in the Swedish welfare state; on the other, gender regimes in the investigated disability organizations. Whereas the former tend to produce gendered outcomes in services, entitlements, employment, etc., the latter tend to produce and reproduce gendered organizational structures, practices, and social justice claims. Several studies have demonstrated a gendered distribution of resources among people with impairments. Compared with men, women generally have less access to money and more health problems, face more difficulties in getting access to rehabilitation and advanced technical aid, and are more likely to be unemployed (e.g. SOU 1999:21:118). Our informants generally seemed to be well aware of the gendered distribution of resources, although some appeared somewhat disinclined to discuss gender issues. Others seemed to attach great importance to these kinds of questions. For instance, they pointed to gendered interactions between public service workers and people with impairments as one possible explanation of the fact that women, to a lesser degree than men, get access to resources. That is, gendered expectations from public service workers combined with differences in women's and men's ways of expressing themselves were considered an important mechanism underlying these gendered outcomes. We were informed about a presumed tendency among the (often) female public service workers to ''feel sorry for the men'', whereas they expect women to ''manage the things they themselves have to manage'', such as a job, domestic work, small children, etc. According to one informant, they are inclined to ''forget that the woman applying for assistance/ . . . /has an impairment''. The vice president of The Swedish Disability Federation (HSO) underlined the necessity of attending to gendered outcomes ''when it comes to transportation service, when it comes to means of assistance, when it comes to social insurances, when it comes to work''. She regretfully admitted, however, that a gender perspective is often missing within her own organization, both in discussions about members' needs and in discussions about which policy to pursue. Gender regimes within disability organizations were a topic much discussed by the president of Forum - Women and Disability (Forum - Kvinnor och handikapp). She recounted to us that women tend to be both silenced and ignored within the disability movement, and that ''women's questions'' are not put high on the political agenda. 
The absence of a gender perspective together with the gendered maldistribution of resources actually formed the main motives for establishing this meta-organization in 1997 (which among other organizations includes SDR, DHR and SRF). The aims of Forum are both to increase women's influence over their respective disability organizations, and to improve the overall life situation of girls and women with impairments, including questions of bodily integrity, such as violence (www.kvinnor-handikapp.se). Some informants addressed recognition issues in terms of organizational practices that imply disrespect of women. For example, one informant communicated to us that it is ''easier to be a man than it is to be a woman''. They listen more to a man. I don't know why. A woman must somehow always prove that she knows what she's talking about. A man doesn't have to do that. If he says that ''I can do that, and that, and that'', they don't question it. But a woman must somehow prove that she can do the thing she just said. It is very odd. Another informant drew parallels to the gender order in society as a whole when bringing up the question of organizational gender regimes. Her intention was apparently to disclose that visually impaired men and women are not participating on equal terms within the organization. Why should it be the case that people with visual impairments are treated alike, when other men and women are not? There is somehow a difference. We know that. People often say that Sweden has come a long way when it comes to gender equality. And we have, compared to many other countries. But still, I think we have a long way to go. And this includes SRF, I think. / . . . /Somehow men still have . . . the power. As implied above, however, there are, or have been, projects and/or networks within the investigated organizations, sections that concentrate specifically on gender issues. However, when considering the priority given to and resources invested in gender issues, it becomes apparent that there is substantial variation across the organizations in our study. In some of them these kinds of activities are temporary in nature, whereas in others they are considered to be of primary concern. To our knowledge, The Swedish Association of the Visually Impaired (SRF) presently puts gender issues higher on its political agenda than do the other individual-based organizations. We will henceforth mainly use examples from this organization when discussing ways in which gender may intersect with disability in social justice claims articulated by the disability movement. Gender struggles have a long history within The Swedish Association of the Visually Impaired (SRF). The present president recounted, however, that whereas gender was a vital issue roughly 50 years ago, it was eventually considered obsolete due to the growing influence of a gender-neutral discourse. We had a strong focus on gender equality work during the 1950s and 1960s, I think. At least during the 1950s. But then it was closed down and it was said that, ''well, we don't need to work toward gender equality since we work toward everybody's equality in society''. ''We don't make distinctions . . . it's important to work for everybody, we should not differentiate between people''. So it [gender equality work] was closed down. The work toward gender equality was, however, taken up again in the 1990s. 
Influential people within the organization felt that it had ''lagged behind the other women's movements in Sweden'' and hence was ''not in phase'', considering that ''male dominance was strong, both in chairmanships around the country, and on the national level''. I am the first female president. So it took 115 years until that became possible. There have been active women on the board, but they have been secretaries or have had other traditionally female assignments. Female members of The Swedish Association of the Visually Impaired (SRF) have insisted on political, ideological, economic and social change both inside and outside the organizational context. Moreover, people holding leading positions have called attention to male hegemony within the organization. For example, some years ago, the former president explained, in an editorial in the membership journal, that The Swedish Association of the Visually Impaired (SRF) had not taken seriously the discrimination facing women within the organization. He acknowledged that it had not been fully understood that the kinds of discriminating mechanisms active in society at large are also active in SRF (SRF-Perspektiv, 2001). In keeping with this, the present political program explicitly states both that differences in living conditions between visually impaired men and women must be illuminated (in order to be eliminated), and that gender equality politics imply that organizational resources must be redistributed from men to women. Moreover, it emphasizes the need to acknowledge women as a social group in their own right, that is, to recognize and revalue the ''difference'' and presumed values, experiences and competences of women. In its plan for gender equality, The Swedish Association of the Visually Impaired (SRF) is described as a male-dominated organization, and the need to increase the representation of women on boards, committees, work groups, etc., is strongly underlined. Consequently, ''the politics of presence'' in disability organizations not only concerns the question of the extent to which people without impairments can represent the interests of people with impairments (see above), but also the extent to which men can rightly interpret and represent the needs and interests of women. The presence of women in the decision-making process has indeed become a political issue within SRF. Since men held all leading positions, they could make decisions among themselves in pretty much closed rooms, and then present ready-made propositions . . . And it then becomes difficult to pursue another policy. / . . . /We felt that [even though] two-thirds of the members of SRF were women they didn't get two-thirds of the resources. And that [gender] questions were typically not focused on. And then we felt that ''something is wrong here''. One rationale behind the idea of a politics of presence is that marginalized social groups need to hold positions on decision-making boards so that their needs and interests will be articulated and eventually met. However, the existence of depreciative value patterns may render mere presence insufficient. The history behind the establishment of Klippan within The Swedish National Association for Persons with Intellectual Disability (FUB) is one example of this (see above). This problem was also brought up during discussions about gender issues. 
One informant reminded us that influence is not only a question of being present, but also of ''daring to say something'' and ''someone listening'' to what you have to say. The president's answer to the question of how it came about that The Swedish Association of the Visually Impaired (SRF) resumed work toward gender equality in the 1990s is quite elucidating in this respect. It was also because . . . to the extent that they had a position on the board, women felt that it was not on equal terms. They didn't listen as much to women, and [they] used techniques for dominance. It was completely okay to ridicule a woman, or to not pay attention to what a woman said. To sort of conceal the weight of [what was said] . . . or to strengthen what men said. There were a lot of these kinds of techniques played out. The quotation above tells of men who, consciously or unconsciously, withhold information from women, belittle them and make them ''invisible''. Techniques such as these are well known to gender researchers as powerful mechanisms for ensuring male dominance. To avoid the kinds of situations described above, subordinated groups sometimes form discursive arenas in which they can ''invent and construct counter discourses'' that permit them to formulate ''oppositional interpretations of their identities, interests, and needs'' (Fraser 1997:81). The formation of counter publics is also used as a way for the less powerful to avoid the risk of being absorbed into a false ''we'' that reflects the more powerful (ibid.). Naturally, the purpose behind the establishment of Klippan may also be understood in these terms. That is, it aims at providing an opportunity for people with intellectual impairments to interpret their own needs and interests. The strategy to form counter publics has been practised a great deal by the wider women's movement, and is apparently also used by female members in disability organizations. One of our informants told us about the undertaking to form a new counter public within one of the studied organizations. I don't really know what we will do. Arranging seminars, [where] women can meet to discuss their issues. Because often they don't dare to voice certain questions when men are listening. They [the men] take over. Things like that. I think it would be good to meet alone sometimes. Not to exclude men, but I mean to meet women only. To talk about things we like to talk about. Establishment of The Forum - Women and Disability in Sweden is one example of a counter public within the disability movement. Another example is the formation of national networks such as Lina, which was initiated by women in The Swedish Association of the Visually Impaired (SRF) to ''encourage women'' and to integrate them into the ''masculine structures that characterize the organization''. This network is, in turn, a member of the European Association Daphne, which was formed to defend the interests of women with visual impairments, and in particular, the interests of women who are the victims of violence. One of the objectives of the network is to collect information on cases of violence faced by visually impaired women, and to raise awareness of this violence among the public and strategic stakeholders. 
Concluding Remarks In this article Nancy Fraser's conceptual framework has been used to analyze social justice claims articulated by Swedish disability organizations struggling to improve the living conditions for people with impairments. It has pointed to complexities confronting the disability movement in its struggle for social justice. The study has shown, first, that in their struggle for parity of participation these organizations insist that welfare provisions should reflect impaired people's own interpretations of their needs. It also suggests that disability organizations balance claims for cultural recognition and economic redistribution differently. While some put approximately equal weight on the two types of justice claims, others tend to accentuate the need for redistribution or the need for recognition. Secondly, our analysis has indicated that the concepts of ''redistribution'' and ''recognition'' are productive, not only in understanding differences in disability organizations' claim-making towards the welfare state, but also in understanding struggles between and within these organizations. It has suggested that struggles over needs interpretations, over representation and over economic resources go on within the disability movement: between ''old'' and ''new'' disability organizations, between members of different genders and between members with and without impairments. Our study has, thus, indicated that the need for recognition and redistribution is not a question exclusively concerning the relation between people with and without impairments. Instead, it has pointed out that questions concerning the construction of group identities (who are ''we''?), claims for social justice (what do we want?), and representation (who can legitimately speak for us?) are indeed much more complex. A third suggestion in this article has been that intersecting power structures, such as gender regimes, are at work in the disability movement. Focusing specifically on gender, it has been proposed that the gendering mechanisms in society at large are also productive in organizations that make up the movement. The results from this study accordingly give support to Fraser's assertion that ''parity of participation'' may be a fruitful concept for assessing both the relative standing of minority groups compared with majorities, and practices at the intragroup level. In other words, one interpretation of the results from our study is that parity of participation within the disability movement requires changes in practices and power relations between genders. A fourth suggestion has been that power differences in access to resources, respect and influence sometimes give rise to conflicts and struggles between different groups of members. Women's claims to increase their representation on decision-making boards, i.e. for ''a politics of presence'', have provided one example of ways in which disadvantaged groups within the disability movement at times take action to remove barriers to equal participation. Attempts to increase impaired people's representation and power in disability organizations have provided another example. The formation of counter publics where groups of members exchange their own experiences and formulate group-specific interests is a third example of actions that challenge existing power structures in the disability movement. 
Notes
1 Some researchers stress that the disability movement shares with other ''new social movements'' the features of being located at the periphery of the traditional political system; of offering a critical evaluation of society; of stressing ''post-materialist'' values; and of being concerned with issues that cross national borders (e.g. Oliver 1990:118-123).
2 We use the respective organization's official translation into English here.
3 With the exception of The Swedish Association of the Visually Impaired (SRF), the individual-based associations in our sample are all members of HSO. However, SRF was also a member of HSO some years ago, and the question of joining it again is presently the subject of considerable debate.
4 Our point of departure is taken in Fraser's theory, which emphasizes that discourses over needs function as a medium for the making and contesting of political claims and ''appears as a site of struggle where groups with unequal discursive and non-discursive resources compete to establish as hegemonic their respective interpretations of legitimate social needs'' (Fraser 1989:166). Hence, our emphasis on needs interpretation in no way contradicts the critique against a needs-based welfare provision, i.e. that this kind of provision may not only open the way for professional domination of welfare provision, but also institutionalize discrimination.
5 Maybe somewhat paradoxically, this does not seem to prevent SDR from identifying with the disability movement. In this respect the relationship between the two seems to differ substantially from the one between Deaf people and the British disability movement. According to British disability researchers the former group does not at all identify with the disability movement (Barnes et al. 1999:204).
6 Other terms that were used are ''traditional'', ''classic'' and ''old organizations'' versus ''diagnosis'', ''medicine'' and ''new organizations''.
7 When regarding the disability movement as a ''new'' social movement, we face a kind of paradox here. That is, in our study the old organizations, rather than the new ones, seem to be part of a ''new'' disability movement (Oliver 1990).
8 This is our translation.
Conserved and divergent aspects of Robo receptor signaling and regulation between Drosophila Robo1 and C. elegans SAX-3 Abstract The evolutionarily conserved Roundabout (Robo) family of axon guidance receptors controls midline crossing of axons in response to the midline repellant ligand Slit in bilaterian animals including insects, nematodes, and vertebrates. Despite this strong evolutionary conservation, it is unclear whether the signaling mechanism(s) downstream of Robo receptors are similarly conserved. To directly compare midline repulsive signaling in Robo family members from different species, here we use a transgenic approach to express the Robo family receptor SAX-3 from the nematode Caenorhabditis elegans in neurons of the fruit fly, Drosophila melanogaster. We examine SAX-3's ability to repel Drosophila axons from the Slit-expressing midline in gain of function assays, and test SAX-3's ability to substitute for Drosophila Robo1 during fly embryonic development in genetic rescue experiments. We show that C. elegans SAX-3 is properly translated and localized to neuronal axons when expressed in the Drosophila embryonic CNS, and that SAX-3 can signal midline repulsion in Drosophila embryonic neurons, although not as efficiently as Drosophila Robo1. Using a series of Robo1/SAX-3 chimeras, we show that the SAX-3 cytoplasmic domain can signal midline repulsion to the same extent as Robo1 when combined with the Robo1 ectodomain. We show that SAX-3 is not subject to endosomal sorting by the negative regulator Commissureless (Comm) in Drosophila neurons in vivo, and that peri-membrane and ectodomain sequences are both required for Comm sorting of Drosophila Robo1. Introduction Members of the Roundabout (Robo) family of axon guidance receptors were identified in genetic screens in Drosophila and Caenorhabditis elegans by virtue of their mutant phenotypes, wherein subsets of axons exhibit guidance errors in homozygous mutant animals (Seeger et al. 1993;Zallen et al. 1999). The family is named after the Drosophila roundabout (robo) gene (later renamed robo1) and reflects the fact that axons in the embryonic CNS of Drosophila robo1 mutants cross and re-cross the midline, forming aberrant circular pathways that resemble traffic roundabouts (Seeger et al. 1993). A screen for mutants exhibiting sensory axon (sax) defects in C. elegans identified mutations in a homologous gene, sax-3/robo, which also regulates midline crossing of axons in addition to its role in sensory axon guidance (Zallen et al. 1998, 1999). While sax-3 is the only robo family gene in C. elegans, two additional robo genes were later identified in Drosophila: robo2 and robo3 (Rajagopalan et al. 2000a, b;Simpson et al. 2000a, b). The three Robo receptors in Drosophila have distinct and in some cases overlapping axon guidance activities, but Robo1 is the main receptor for canonical Slit-dependent midline repulsion (Rajagopalan et al. 2000a;Simpson et al. 2000b). Analyses of the Robo1 and SAX-3 protein sequences showed that these genes encode transmembrane receptor proteins with an evolutionarily conserved ectodomain structure including five immunoglobulin-like (Ig) domains and three fibronectin type III (Fn) repeats (Kidd et al. 1998a;Zallen et al. 1998). 
While there is little sequence similarity in the receptors' cytoplasmic domains, three short conserved cytoplasmic (CC1, CC2, and CC3) motifs were identified by virtue of their evolutionary conservation among Robo family members in flies, worms, and mammals (Kidd et al. 1998a). Later studies identified a fourth CC motif (named CC0) that was present in fly and human Robo proteins (Bashaw et al. 2000), but it was not clear whether CC0 was also present in the cytoplasmic domain of C. elegans SAX-3 (Simpson et al. 2000b;Dickson and Gilestro 2006). Upon Slit binding, Robo receptors activate a cytoplasmic signaling pathway that induces collapse of the local actin cytoskeleton, resulting in growth cone repulsion. In Drosophila, midline repulsive signaling by Robo1 involves recruitment of downstream effectors that interact with the CC2 and CC3 motifs (Bashaw et al. 2000;Fan et al. 2003;Lundström et al. 2004;Hu et al. 2005;Yang and Bashaw 2006), as well as receptor proteolysis by the ADAM-10 metalloprotease Kuzbanian (Kuz) and clathrin-dependent endocytosis of proteolytically processed Robo1 (Coleman et al. 2010;Chance and Bashaw 2015). Phosphorylation of conserved tyrosine residues by the Abl tyrosine kinase (particularly within the CC1 motif) is important for negative regulation of Robo1-dependent signaling (Bashaw et al. 2000). Drosophila Robo1 is also subject to negative regulation by the endosomal sorting receptor Commissureless (Comm) (Tear et al. 1996;Kidd et al. 1998b;Keleman et al. 2002, 2005) and a second Robo family member, Robo2, which interacts with Robo1 in trans to inhibit premature Slit response in precrossing commissural axons (Simpson et al. 2000b;Spitzweck et al. 2010;Evans et al. 2015). As Comm and Robo2 appear to be conserved only within a subset of insect species, it is not yet clear how well conserved the signaling and regulatory mechanisms of Drosophila Robo1 are, although the mammalian Nedd4-family interacting proteins Ndfip1 and Ndfip2 can act analogously to Drosophila Comm to regulate surface levels of Robo1 on precrossing commissural axons in the spinal cord (Gorla et al. 2019), and the divergent Robo3/Rig-1 receptor in mammals appears to be able to antagonize Slit-Robo repulsion via a mechanism that is distinct from Drosophila Robo2 (Sabatier et al. 2004;Jaworski et al. 2010;Zelina et al. 2014). The mammalian PRRG4 protein can also regulate subcellular distribution of mammalian Robo1, similar to Comm's effect on Drosophila Robo1, though whether PRRG4 influences midline crossing in mammalian neurons is not yet known (Justice et al. 2017). Trans-species ligand-receptor binding experiments suggested that the mechanism of Slit-Robo interaction is highly conserved, as insect Slit can bind to mammalian Robos and vice-versa (Brose et al. 1999). Subsequent co-crystallization and biophysical studies of Slit-Robo interaction in insect and vertebrate Robos confirmed that the Slit-Robo interface between the Slit D2 domain and the Robo Ig1 domain was highly similar across species (Morlot et al. 2007;Fukuhara et al. 2008). 
Gain of function and genetic rescue experiments in Drosophila showed that Robo receptors from the flour beetle Tribolium castaneum can activate midline repulsion in fly neurons, even in the absence of Drosophila robo1 (Evans and Bashaw 2012), while expression of human Robo1 (hRobo1) instead interfered with midline repulsion and produced dominant negative-like phenotypes in the fly embryonic CNS, suggesting that it is unable to signal repulsion in fly neurons (Justice et al. 2017). Here, we examine the ability of SAX-3, the single C. elegans Robo ortholog, to signal midline repulsion in Drosophila embryonic neurons using gain of function and genetic rescue assays. We show that C. elegans SAX-3 can repel Drosophila axons from the Slit-expressing embryonic midline when misexpressed broadly or in specific subsets of commissural neurons, and can partially rescue ectopic midline crossing defects in robo1 mutant embryos when expressed in robo1's normal expression pattern. Using a panel of Robo1/SAX-3 chimeric receptors, we show that the cytoplasmic domain of SAX-3 can signal midline repulsion as effectively as the cytodomain of Robo1 when paired with the Robo1 ectodomain, indicating that the SAX-3 cytodomain can interact with and activate downstream repulsive signaling components in Drosophila neurons. We show that SAX-3 is not sensitive to endosomal sorting by the negative regulator Comm, which depends on sequences within the peri-membrane and ectodomain regions of Robo1, but SAX-3 is sensitive to sorting-independent Comm inhibition. Together, our results provide evidence that repulsive signaling mechanisms of Robo family receptors are conserved outside of insects, and provide further insight into the structural basis for negative regulation of midline repulsive signaling by Drosophila Robo1. Molecular biology pUAST cloning: The sax-3 coding sequence was amplified via PCR from a sax-3 cDNA template and cloned as an XbaI-KpnI fragment into a pUAST vector (p10UASTattB) including 10xUAS and an attB site for phiC31-directed site-specific integration. Robo1 and SAX-3 p10UASTattB constructs include identical heterologous 5′ UTR and signal sequences (derived from the Drosophila wingless gene) and an N-terminal 3xHA tag. robo1 rescue construct cloning: Construction of the robo1 genomic rescue construct was described previously (Brown et al. 2015). Full-length robo1 and sax-3 coding sequences were cloned as BglII fragments into the BamHI-digested backbone. Receptor proteins produced from this construct include the endogenous Robo1 signal peptide, and the 4xHA tag is inserted directly upstream of the Ig1 domain. Robo1/SAX-3 chimeric receptors: robo1 and sax-3 receptor fragments were amplified separately via PCR, then assembled and cloned into the robo1 rescue construct backbone using Gibson Assembly (New England Biolabs E2611). All coding regions were completely sequenced to ensure no other mutations were introduced. Robo1/SAX-3 variants include the following amino acid residues after the N-terminal 4xHA tag, relative to Genbank reference sequences AAF46887.1 (Robo1) and AAC38848.1 (SAX-3): Results Sequence comparison of C. elegans SAX-3 and Drosophila Robo1 We compared the full-length protein sequences of Drosophila Robo1 and C. elegans SAX-3 to measure the degree of sequence similarity among the eight conserved ectodomain structural elements (5 Ig + 3 Fn) as well as the receptors' cytoplasmic domains (Figure 1) (Kidd et al. 1998a;Zallen et al. 1998). 
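The per-domain identity figures reported below can be computed from pairwise alignments of the corresponding domain sequences. The following is a minimal sketch of one way to do this; the aligned fragments shown are invented placeholders rather than the actual Robo1 or SAX-3 domains, and the choice to count identity only over positions where neither sequence has a gap is an assumption, not necessarily the convention used in the original analysis.

```python
# Sketch: percent identity between two pre-aligned sequences (gaps as '-').
# The example fragments are placeholders, NOT real Robo1/SAX-3 domain sequences.

def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity over aligned positions where neither sequence is gapped."""
    assert len(aln_a) == len(aln_b), "aligned sequences must have equal length"
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

# Hypothetical aligned Ig1 fragments (illustrative only).
robo1_ig1_aln = "PLRFTE-QAGHVLKS"
sax3_ig1_aln = "PLKFSEDQAGYVLRS"

print(f"Ig1 identity: {percent_identity(robo1_ig1_aln, sax3_ig1_aln):.1f}%")
```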
The amino acid sequences of the eight ectodomain elements display varying degrees of evolutionary conservation, with 29%-47% sequence identity between individual domains and the highest degree of sequence identity within Ig1 (46%), Ig3 (45%), and Ig5 (47%) (Figure 1A). Notably, the degree of sequence conservation within the Slit-binding Ig1 domain appears to be lower between Drosophila Robo1 and C. elegans SAX-3 (46% identical) than between Drosophila Robo1 and human Robo1 (58% identical) (Kidd et al. 1998a). Although there is much less sequence conservation in the receptors' cytoplasmic domains, a few areas of strong evolutionary conservation are apparent, including the previously identified CC1, CC2, and CC3 motifs (Figure 1B). As previously described, the order of motifs is not the same between the two species, with CC3 located upstream of CC2 in SAX-3 but downstream of CC2 in Drosophila Robo1 (Kidd et al. 1998a). In addition, a short sequence region upstream of CC1 in C. elegans SAX-3 (YHYAQL) includes a tyrosine, alanine, and hydrophobic valine/leucine combination (YAxU) that is conserved in the CC0 motif in all three Drosophila Robos (Rajagopalan et al. 2000b;Simpson et al. 2000b;Dickson and Gilestro 2006), leading us to designate this sequence as SAX-3 CC0 (Figure 1B). Finally, we note that in addition to the conserved tyrosine residues within CC0 and CC1, a third tyrosine position conserved between Drosophila Robo1 (HSPYSDA) and human Robo1 (PVQYNIV) and shown to be an Abl phosphorylation target in vitro (Bashaw et al. 2000) also appears to be present in the SAX-3 cytodomain (PARYADH), closely adjacent to the CC3 motif (Figure 1B). We therefore conclude that all four CC motifs (CC0-CC3) and three known phosphorylation sites are present in the cytoplasmic domain of C. elegans SAX-3. This conservation suggests that the signaling mechanism(s) downstream of Robo1 and SAX-3 might also be evolutionarily conserved, or at least that SAX-3 might be able to activate downstream components in fly neurons to influence axon guidance and/or signal midline repulsion.

Figure 1 Sequence comparison of D. melanogaster Robo1 and C. elegans SAX-3. (A) Schematic comparison of the two receptors. Each exhibits the conserved Robo family ectodomain structure including five immunoglobulin-like (Ig) domains and three fibronectin type III (Fn) repeats, and four conserved cytoplasmic (CC) motifs. Numbers indicate percent amino acid identity between the two proteins for each domain. Brackets indicate the extent of ectodomain (E), peri-membrane region (P), and cytoplasmic domain (C). Peri-membrane region is also highlighted in gray. (B) Protein sequence alignment. Structural features are indicated below the sequence. Fn domains have been re-annotated relative to Kidd et al. (1998a) based on revised predictions of beta strand locations. Identical residues are shaded black; similar residues are shaded gray. Ig, immunoglobulin-like domain; Fn, fibronectin type III repeat; Tm, transmembrane helix; CC, conserved cytoplasmic motif. Peri-membrane region (Gilestro 2008) is boxed. Although all four CC motifs (CC0-CC3) are present in both proteins, CC3 is located upstream of CC2 in SAX-3 and thus does not align with Robo1 CC3 (Kidd et al. 1998a). Asterisks indicate conserved cytoplasmic tyrosine residues that are phosphorylated by Abl tyrosine kinase in human Robo1 (Bashaw et al. 2000).

C. elegans SAX-3 can inhibit midline crossing of Drosophila axons in vivo To test whether C. 
elegans SAX-3 can regulate axon guidance of Drosophila neurons, we first examined the effects of misexpressing SAX-3 broadly in all Drosophila neurons using the GAL4/UAS system (Brand and Perrimon 1993). We created a transgenic line of flies carrying a GAL4-responsive UAS-SAX3 transgene and crossed these flies to a second line carrying a GAL4 transgene expressed in all neurons (elav-GAL4). We collected embryos carrying both transgenes (elav-GAL4/UAS-SAX3) and examined expression of the transgenic SAX-3 protein using an antibody against an N-terminal HA epitope tag. We examined the effect of SAX-3 misexpression on axon guidance using antibodies which detect all axons in the embryonic CNS (anti-HRP) and a subset of longitudinal axon fascicles (anti-FasII). To compare SAX-3's activity with that of Drosophila Robo1, we performed the same assay using a UAS-Robo1 transgene (Evans and Bashaw 2012;Brown et al. 2015). In elav-GAL4/UAS-SAX3 and elav-GAL4/UAS-Robo1 embryos, transgenic SAX-3 or Robo1 proteins were expressed at similar levels, and both were properly localized to axons in the embryonic ventral nerve cord (Figure 2, D and E). In both misexpression backgrounds, commissural axon tracts were thin or absent in many segments, reflecting ectopic midline repulsion and consistent with our previous analyses of Robo1 misexpression (Figure 2, B and C) (Brown et al. 2015). We also observed ectopic midline crossing in some segments in elav-GAL4/UAS-SAX3 embryos, indicating that SAX-3 misexpression can both promote and inhibit midline crossing when expressed broadly in Drosophila embryonic neurons (Figure 2C). Consistent with this, while transgenic Robo1 protein was only detectable on non-midline-crossing axons, we observed SAX-3 protein at equivalent levels on both midline-crossing and noncrossing axons (Figure 2E). These results demonstrate that SAX-3 can be properly expressed and localized to axons in Drosophila embryonic neurons, and is capable of influencing midline crossing when expressed broadly in all neurons. To more closely examine the ability of SAX-3 to repel axons from the midline in the Drosophila embryonic CNS, we used eg-GAL4, a more restricted GAL4 line that is expressed in two distinct subsets of commissural neurons (the EG and EW neurons). In this experiment we also included a UAS-TauMycGFP (UAS-TMG) transgene to label the EG and EW cell bodies and axons with GFP. While EW axons cross the midline in 100% of segments in eg-GAL4, UAS-TMG control embryos, Robo1 or SAX-3 misexpression was capable of preventing EW axon crossing. We found that misexpression of SAX-3 with eg-GAL4 prevented midline crossing of EW axons in 42.5% of segments (n = 118 segments in 15 embryos), compared to equivalent expression of Robo1 which prevented EW axons from crossing in 96.9% of segments (n = 96 segments in 12 embryos) (Figure 2, F-J). We therefore conclude that C. elegans SAX-3 can activate midline repulsive signaling in Drosophila embryonic neurons and can prevent midline crossing when expressed broadly or in a restricted subset of commissural neurons. Pan-neural expression of SAX-3 is unable to rescue midline crossing in robo1 mutants The gain-of-function experiments described above demonstrate that SAX-3 can induce ectopic midline repulsion when expressed in Drosophila neurons. Importantly, these experiments were carried out in embryos expressing normal levels of endogenous Robo1. 
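One simple way to put segment counts like those reported above on a statistical footing is sketched below. The counts are back-calculated from the reported percentages (42.5% of 118 segments for SAX-3; 96.9% of 96 segments for Robo1) and rounded to whole segments, and the test shown (Fisher's exact test on pooled segment counts) is just one reasonable option, not necessarily the statistical procedure used by the authors.

```python
# Sketch: comparing the proportion of segments in which EW axons were prevented
# from crossing the midline under SAX-3 versus Robo1 misexpression.
# Counts are approximate, back-calculated from the percentages quoted in the text.
from scipy.stats import fisher_exact

sax3_counts = [50, 68]    # [non-crossing, crossing] out of 118 segments (~42.5% repelled)
robo1_counts = [93, 3]    # [non-crossing, crossing] out of 96 segments (~96.9% repelled)

odds_ratio, p_value = fisher_exact([sax3_counts, robo1_counts])
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.2e}")
```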
To determine whether SAX-3 can promote midline repulsion in embryos lacking robo1, we performed a rescue assay using our UAS-Robo1 and UAS-SAX3 transgenes in robo1 null mutants (Brown et al. 2015). In robo1 1 homozygous mutant embryos, FasII-positive axons cross the midline in 100% of segments; these axons do not cross the midline in wild-type embryos (Figure 2, K and L). Forcing high levels of Robo1 expression in all neurons in robo1 mutants restores midline repulsion and rescues the ectopic midline crossing phenotype, although expressing Robo1 at such high levels in robo1 mutants also induces additional defects including ectopic midline repulsion and disorganization of longitudinal axon pathways (Figure 2M) (Brown et al. 2015). When we forced high-level SAX-3 expression in all neurons in robo1 mutant embryos (robo1 1 /robo1 1 ; elav-GAL4/UAS-SAX3), we observed a number of severe defects that were distinct from the stereotypical midline crossing defects seen in robo1 mutants (Figure 2, N, O-R). In many segments, all of the longitudinal axons collapsed together and crossed the midline together, resulting in longitudinal breaks or gaps in the axon scaffold. Some segments lacked commissures completely, while in others a single thick commissural bundle was present (Figure 2, P-R show breaks in the longitudinal connectives, thin or absent commissures, and fused commissures, respectively). In most segments axons did not linger at the midline or re-cross within the same segment, suggesting that some level of midline repulsion is intact in these embryos. However, the additional defects caused by high levels of SAX-3 misexpression in robo1 mutant embryos prevented us from accurately measuring the extent to which SAX-3 could substitute for Robo1 to signal midline repulsion (if any). Expression of SAX-3 in Drosophila embryonic neurons via a robo1 rescue transgene To more accurately compare the midline repulsive activity of SAX-3 and Robo1 in Drosophila neurons, we next used a robo1 rescue transgene that includes regulatory sequences from the Drosophila robo1 gene to express sax-3 in a pattern and expression level that closely reproduces the endogenous expression of robo1 (Figure 3A). We have previously used this transgenic approach to perform structure-function analyses of Robo1 ectodomain elements (Brown et al. 2015, 2018;Reichert et al. 2016;Brown and Evans 2020). All of the constructs described herein include the endogenous signal peptide from Robo1 and a 4xHA epitope tag inserted directly upstream of the Ig1 domain, and were inserted at the same genomic location (28E7) to ensure equivalent expression levels between transgenes (Figure 3A). As we have previously described, expression of full-length transgenic Robo1 from our rescue construct accurately reproduced the endogenous expression pattern of Robo1, with the HA-tagged transgenic Robo1 protein detectable on longitudinal axons in the embryonic ventral nerve cord but largely excluded from commissural axon segments (Figure 3B). We found that SAX-3 protein expressed from an equivalent transgene was also properly translated, expressed at similar levels to Robo1, and localized to axons in the embryonic CNS, but unlike Robo1 was not excluded from commissures (Figure 3C). Homozygous embryos carrying two copies of our robo1::sax3 transgene in addition to two wild-type copies of the endogenous robo1 gene (+, robo1::sax3) displayed slightly thickened commissures and a reduced distance between the longitudinal connectives and the midline, suggesting a mild dominant-negative effect caused by SAX-3 expression in otherwise wild-type embryos (Figure 3C). 
Staining with an anti-FasII antibody confirmed a low level of ectopic midline crossing in these embryos (not shown). SAX-3 can partially rescue midline repulsion in the absence of robo1 To determine whether SAX-3 can substitute for Robo1 to promote midline repulsion during embryonic development, we introduced our robo1::sax3 transgene into a robo1 null mutant background and examined midline repulsion using anti-FasII, which labels a subset of longitudinal axon pathways in the Drosophila embryonic CNS. In wild-type stage 16 and 17 Drosophila embryos, FasII-positive axons do not cross the midline (Figure 4A), while in robo1 null mutants they cross the midline ectopically in 100% of segments in the ventral nerve cord (Figure 4B). Restoring expression of full-length Robo1 via the robo1::robo1 rescue transgene completely rescues midline repulsion in robo1 mutant embryos (Figure 4C) (Brown et al. 2015). Expressing sax-3 in robo1's normal pattern partially restored midline repulsion in embryos lacking endogenous robo1 (robo1 1 , robo1::sax3) (Figure 4D). In these embryos, FasII-positive axons crossed the midline in 64.8% of segments, which represents a significant rescue compared to robo1 null mutants (p < 0.0001 by Student's t-test). These results demonstrate that SAX-3 can signal midline repulsion in Drosophila embryonic neurons and can substitute for Drosophila robo1 in its endogenous context of preventing longitudinal axons from crossing the midline, although it cannot perform this role as effectively as robo1. Robo1/SAX-3 chimeric receptor variants We have noted two differences between Robo1 and SAX-3 when they are expressed in Drosophila embryonic neurons: first, Robo1 protein is largely restricted to longitudinal axons and excluded from commissures, while SAX-3 is expressed uniformly on both longitudinal and commissural axon segments; second, transgenic Robo1 is able to fully rescue midline crossing defects in a robo1 mutant background, while SAX-3 can only partially rescue these defects. We hypothesized that differences in axonal distribution might be due to sequence differences in the fibronectin repeats (Fn) or peri-membrane region, which have been implicated in commissural clearance and/or regulation of Drosophila Robo1 by Comm, which prevents surface localization of Robo1 protein in precrossing commissural neurons (Gilestro 2008;Brown et al. 2018). 
precrossing commissural neurons (Gilestro 2008; Brown et al. 2018). (Figure 3, B-I: HA-tagged full-length Robo1 expressed from the robo1 rescue transgene is localized to longitudinal axons and excluded from the anterior (AC) and posterior (PC) commissures, transgenic SAX-3 is localized to axons but not excluded from commissures, and the chimeric receptors display varying degrees of commissural exclusion; the bar graph quantifies the ratio of average HA pixel intensity on longitudinal versus commissural axons, compared to full-length Robo1 by two-tailed Student's t-test with Bonferroni correction for multiple comparisons; error bars show s.d.; n = 3 embryos per genotype.) Furthermore, we suspected that SAX-3's reduced ability to rescue midline repulsion might be due to differences in binding or responding to Drosophila Slit, which should be ectodomain-dependent, or in downstream signaling output, which would be cytodomain-dependent. To address these possibilities, we constructed a series of chimeric receptors combining the ectodomain (E), peri-membrane (P), and cytoplasmic (C) regions of Robo1 and SAX-3 (Figures 1A and 3A). We defined the ectodomains (E) of Robo1 and SAX-3 as all sequences upstream of the 83 aa peri-membrane region of Robo1 defined by Gilestro (amino acids H891-W973) (Gilestro 2008) or the equivalent region of SAX-3 (amino acids N847-Q949), while the cytoplasmic domains (C) were defined as all sequences downstream of this region. By these definitions, all ectodomain structural elements (Ig1-5, Fn1-3) are included in the ectodomain (E) region, while all of the conserved cytoplasmic (CC) motifs are included in the cytoplasmic (C) domain. The peri-membrane (P) region includes the transmembrane helix in addition to 21 aa upstream and 33 aa downstream in both proteins (Figure 1A). We then made transgenic lines for each chimeric receptor using our robo1 rescue transgene construct, and we examined the expression and localization of these HA-tagged chimeric receptors in Drosophila embryonic neurons, as well as their ability to rescue midline crossing defects in a robo1 null mutant background.
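As a concrete illustration of these boundary definitions, the short sketch below splits a receptor sequence into E, P, and C regions around a stated peri-membrane interval and assembles a chimera name and sequence. The sequences and helper functions are hypothetical placeholders, not the actual Robo1 or SAX-3 sequences or the cloning strategy used here; only the boundary logic follows the definitions in the text.

```python
# Hypothetical sketch of the E/P/C partitioning used to name the chimeras.
def split_epc(seq: str, peri_start: int, peri_end: int):
    """Return (ectodomain, peri-membrane, cytodomain) using 1-based, inclusive
    residue coordinates for the peri-membrane region."""
    return seq[:peri_start - 1], seq[peri_start - 1:peri_end], seq[peri_end:]

def build_chimera(parts_by_receptor, plan):
    """plan maps each region ('E', 'P', 'C') to a receptor name, e.g.
    {'E': 'robo1', 'P': 'robo1', 'C': 'sax3'} -> 'robo1EP-sax3C'."""
    seq = "".join(parts_by_receptor[plan[r]][i] for i, r in enumerate("EPC"))
    name_parts = []
    for receptor in dict.fromkeys(plan.values()):   # preserve first-seen order
        regions = "".join(r for r in "EPC" if plan[r] == receptor)
        name_parts.append(f"{receptor}{regions}")
    return "-".join(name_parts), seq

# Placeholder sequences and peri-membrane coordinates (illustrative only).
robo1 = "M" + "A" * 1000 + "W" * 200
sax3 = "M" + "G" * 950 + "Y" * 180
parts = {"robo1": split_epc(robo1, 891, 973), "sax3": split_epc(sax3, 847, 949)}

name, seq = build_chimera(parts, {"E": "robo1", "P": "robo1", "C": "sax3"})
print(name, len(seq))   # prints the chimera name and assembled placeholder length
```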
Robo1 ectodomain and peri-membrane regions both contribute to commissural clearance When expressed from our robo1 rescue construct, transgenic Robo1 protein is present on longitudinal axons but largely absent from commissures, while transgenic SAX-3 is uniformly expressed on both longitudinal and commissural axon segments. We found that our Robo1/SAX-3 chimeric receptors were all properly translated and localized to axons, and displayed varying levels of commissural clearance which correlated with the presence of the Robo1 ectodomain and peri-membrane regions. The variant that included both the ectodomain (E) and peri-membrane (P) region from Robo1 (robo1EP-sax3C) displayed nearly complete commissural clearance similar to full-length Robo1 (Figure 3E), while the variant that included both the ectodomain (E) and peri-membrane (P) region from SAX-3 (sax3EP-robo1C) was not cleared at all, similar to full-length SAX-3 (Figure 3H). The variants that included either the ectodomain (E) or peri-membrane (P) region from Robo1, but not both (robo1E-sax3PC, robo1EC-sax3P, sax3E-robo1PC, sax3EC-robo1P), displayed intermediate levels of commissural clearance (Figure 3, D, F, G, and I). These results suggest that both the ectodomain and peri-membrane regions of Robo1 contribute to its exclusion from commissural axon segments, and that their contributions to commissural exclusion may be additive. (Figure 5, A-P: stage 16 embryos carrying one copy of the indicated robo1 rescue transgene together with elav-GAL4 alone or with elav-GAL4 and UAS-Comm, stained with anti-HRP and anti-HA; sibling embryo pairs were stained in the same tube and imaged with identical confocal settings, and the bar graph quantifies HA levels for each transgene with and without UAS-Comm, compared by two-tailed Student's t-test with Bonferroni correction; error bars show s.d.; n = 2-4 embryos per genotype.) This is consistent with our previously described Robo1ΔFn3 variant, which includes an intact peri-membrane region and also displays partial clearance from commissures (Brown et al. 2018), and suggests that whatever sequence(s) within Robo1 Fn3 contribute to commissural clearance are not conserved in SAX-3.
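A minimal sketch of how the commissural-clearance quantification described in the Figure 3 legend could be computed is given below. The intensity values are invented placeholders, and the statistical comparison simply illustrates a two-tailed t-test with a Bonferroni correction; it does not reproduce the actual image analysis pipeline used here.

```python
# Sketch of the longitudinal/commissural HA-intensity ratio and the
# Bonferroni-corrected comparison to full-length Robo1. All numbers are
# invented placeholders; scipy provides the two-tailed t-test.
from statistics import mean
from scipy import stats

def clearance_ratio(longitudinal_intensities, commissural_intensities):
    """Average HA pixel intensity on longitudinal axons divided by the
    average intensity on commissural axons (per embryo)."""
    return mean(longitudinal_intensities) / mean(commissural_intensities)

# Per-embryo ratios (n = 3 embryos per genotype in the text); placeholder values.
ratios = {
    "robo1::robo1":  [6.1, 5.8, 6.4],
    "robo1::sax3":   [1.1, 0.9, 1.0],
    "robo1EP-sax3C": [5.5, 5.9, 5.2],
}

reference = ratios["robo1::robo1"]
comparisons = [g for g in ratios if g != "robo1::robo1"]
alpha = 0.05 / len(comparisons)      # Bonferroni correction for multiple comparisons

for genotype in comparisons:
    t_stat, p_value = stats.ttest_ind(reference, ratios[genotype])
    verdict = "significant" if p_value < alpha else "n.s."
    print(f"{genotype}: p = {p_value:.3g} ({verdict} at corrected alpha = {alpha:.3g})")
```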
Chimeric Robo1/SAX-3 receptors containing the Robo1 ectodomain can fully rescue midline repulsion in robo1 mutants We next asked whether our Robo1/SAX-3 chimeric receptors could rescue midline crossing defects caused by loss of robo1. To this end, as with our full-length robo1::sax3 transgene described above, we introduced each transgene into a robo1 null mutant background and quantified ectopic crossing of FasII-positive longitudinal axons in stage 16 and 17 embryos (Figure 4, E-J). We found that each of the receptor variants that included Robo1's ectodomain (robo1E-sax3PC, robo1EP-sax3C, and robo1EC-sax3P) could fully rescue midline crossing defects in robo1 mutants, equivalent to full-length Robo1 (Figure 4, E-G), while variants that included SAX-3's ectodomain (sax3E-robo1PC, sax3EP-robo1C, sax3EC-robo1P) could not rescue midline repulsion (Figure 4, H-J). These results suggest that SAX-3's reduced ability to rescue robo1-dependent midline repulsion is due to functional difference(s) between the Robo1 and SAX-3 ectodomains, perhaps their relative affinities for Drosophila Slit. These results also demonstrate that the SAX-3 cytoplasmic domain is able to activate downstream repulsive signaling in Drosophila neurons as effectively as the Robo1 cytoplasmic domain, as long as it is paired with Robo1's ectodomain. Robo1 ectodomain and peri-membrane regions are both required for down-regulation by Comm Comm is a negative regulator of Slit-Robo signaling in Drosophila that prevents newly synthesized Robo1 protein from reaching the growth cone surface in precrossing commissural axons (Tear et al. 1996; Kidd et al. 1998b; Keleman et al. 2002; Gilestro 2008). comm is normally expressed transiently in commissural neurons as their axons are crossing the midline, and its transcription is rapidly extinguished after midline crossing (Keleman et al. 2002). Accordingly, forced expression of Comm in all neurons leads to a strong reduction in Robo1 protein levels and an ectopic midline crossing phenotype that resembles robo1 or slit loss-of-function mutants (Kidd et al. 1998b; Gilestro 2008; Brown et al. 2015, 2018; Reichert et al. 2016). Notably, Robo1 variants that are resistant to endosomal sorting by Comm can still be antagonized by Comm via an uncharacterized but sorting-independent mechanism (Gilestro 2008). We and others have shown that the peri-membrane region and Fn3 domain of Robo1 are each required for downregulation by Comm, but Robo1 variants lacking these regions are still subject to sorting-independent Comm antagonism (Gilestro 2008; Brown et al. 2018). Comm appears to be conserved only within insects, and Comm orthologs have not been identified in nematodes or other animal groups [though Ndfip and PRRG4 proteins have been proposed as functional analogs of Comm in mammals (Justice et al. 2017; Gorla et al. 2019)]. We therefore asked whether SAX-3 is sensitive to endosomal sorting or sorting-independent antagonism by Drosophila Comm when expressed in Drosophila neurons. We used elav-GAL4 and UAS-Comm transgenes to drive high levels of Comm expression in all neurons in embryos carrying one copy of our Robo1, SAX-3, or Robo1/SAX-3 chimeric receptor transgenes, then examined the effect on the expression levels of each transgene along with axon scaffold architecture using anti-HA and anti-HRP antibodies (Figure 5).
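The effect of Comm misexpression on each transgene can be summarized in the same spirit as the clearance quantification above: the sketch below compares per-embryo HA levels with and without UAS-Comm for each transgene, again with invented placeholder values and a plain two-tailed t-test standing in for the full image quantification.

```python
# Sketch of the +/- UAS-Comm comparison of transgenic receptor levels.
# HA intensities per embryo are invented placeholders; the ratio and the
# two-tailed t-test mirror the quantification described for Figure 5.
from statistics import mean
from scipy import stats

ha_levels = {
    # transgene: (without UAS-Comm, with UAS-Comm), one value per embryo
    "robo1::robo1": ([210, 190, 205], [45, 60, 50]),
    "robo1::sax3":  ([180, 175, 190], [178, 185, 172]),
}

bonferroni_alpha = 0.05 / len(ha_levels)
for transgene, (without_comm, with_comm) in ha_levels.items():
    ratio = mean(without_comm) / mean(with_comm)
    t_stat, p_value = stats.ttest_ind(without_comm, with_comm)
    sensitive = p_value < bonferroni_alpha
    print(f"{transgene}: -Comm/+Comm ratio = {ratio:.2f}, "
          f"p = {p_value:.3g}, Comm-sensitive = {sensitive}")
```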
As we have previously described, Comm misexpression leads to a strong reduction in transgenic Robo1 protein as well as midline collapse of the axon scaffold, compared to sibling embryos carrying elav-GAL4 only, reflecting endosomal sorting and degradation of Robo1 protein in the presence of Comm (Figure 5, A and I) (Brown et al. 2015, 2018; Reichert et al. 2016). In contrast, we found that Comm misexpression produces a strong midline collapse phenotype without altering SAX-3 levels in embryos carrying our robo1::sax3 transgene (Figure 5, B and J). This suggests that SAX-3 is insensitive to endosomal sorting by Comm, but still subject to sorting-independent antagonism. Among our Robo1/SAX-3 chimeric receptor transgenes, all were insensitive to Comm sorting except the chimeric receptor that included the ectodomain and peri-membrane region of Robo1 (robo1EP-sax3C) (Figure 5, D and L), supporting the idea that both the Fn3 and peri-membrane regions are required for endosomal sorting of Robo1 by Comm in vivo. Like full-length Robo1 and SAX-3, all of the chimeric receptor transgenes were sensitive to sorting-independent antagonism by Comm, as evidenced by the midline collapse phenotype caused by Comm misexpression in embryos carrying any of our rescue transgenes (Figure 5, I-P). SAX-3-dependent midline repulsion in Drosophila neurons is hyperactive in comm mutants The above results demonstrate that Drosophila Comm is unable to downregulate C. elegans SAX-3 expression in Drosophila neurons, but SAX-3 is still sensitive to sorting-independent regulation by Comm when Comm is misexpressed in all neurons. In addition, embryos carrying the robo1::sax3 transgene are not commissureless (see Figures 3C and 4D), suggesting that Comm may be regulating SAX-3-mediated midline repulsion in these embryos. Alternatively, SAX-3 may be completely free of Comm inhibition in these embryos, and the lack of a commissureless phenotype in robo1::sax3 embryos may be due solely to the fact that SAX-3 cannot signal midline repulsion as efficiently as Robo1 in Drosophila neurons. To distinguish between these possibilities, we examined the effect of removing comm in embryos expressing either the robo1::robo1 or robo1::sax3 transgenes in place of endogenous robo1 (Figure 6, A-G). Amorphic comm mutant embryos display a strongly commissureless phenotype, with few or no axons crossing the midline (Figure 6D). This phenotype is robo1-dependent, as robo1, comm double mutant embryos phenocopy robo1 mutants (Figure 6E) (Seeger et al. 1993). Thus, the commissureless phenotype seen in comm embryos is due to hyperactivity of endogenous robo1. robo1, comm double mutants carrying our robo1::robo1 rescue transgene also display a commissureless phenotype, indicating that transgenic Robo1 expressed from the rescue transgene is hyperactive in the absence of comm (Figure 6F). Similarly, robo1, comm double mutants carrying our robo1::sax3 rescue transgene also display a strongly commissureless phenotype, although one that is slightly less severe than in comm mutants alone or robo1, robo1::robo1; comm compound mutants (Figure 6G). We infer from this phenotype that SAX-3 protein expressed from the robo1::sax3 transgene is normally subject to negative regulation by comm, and SAX-3-dependent midline repulsive signaling becomes hyperactive in the absence of comm.
The difference in severity of the commissureless phenotypes in robo1, robo1::robo1; comm and robo1, robo1::sax3; comm embryos likely reflects the quantitative difference in signaling activity of Robo1 and SAX-3 in Drosophila neurons. (Figure 6, A-G: embryos stained with anti-HRP and anti-FasII, with the anti-HRP channel shown alone in the lower images; the axon scaffold forms normally in embryos heterozygous for amorphic alleles of robo1 or comm; in homozygous comm mutants the commissures do not form because endogenous Robo1 is hyperactive; robo1, comm double mutants phenocopy robo1 mutants; the commissureless phenotype is reproduced in robo1, comm double mutants carrying the robo1::robo1 transgene and, somewhat less severely, in those carrying robo1::sax3, where a minority of segments display thickened commissures; a table quantifies the percentage of segments displaying each commissural phenotype.) Discussion In this study, we have used a transgenic approach in the Drosophila embryonic CNS to examine the evolutionary conservation of midline repulsive signaling activity between two members of the Robo family of axon guidance receptors: Drosophila melanogaster Robo1 and C. elegans SAX-3. Robo1 and SAX-3 were two of the first Robo family members to be described, and their similar protein structure and developmental roles suggested strong evolutionary conservation of midline repulsive signaling mechanisms across animal groups (Kidd et al. 1998a; Zallen et al. 1998). Here, we have directly examined this functional conservation by expressing SAX-3 in Drosophila embryonic neurons and testing its ability to signal midline repulsion as well as to substitute for Drosophila robo1 to regulate midline crossing during embryonic development. We show that C. elegans SAX-3 can prevent Drosophila axons from crossing the midline, presumably by signaling midline repulsion in response to Drosophila Slit, and can partially substitute for Drosophila Robo1 to properly regulate midline crossing of longitudinal axons in the Drosophila embryonic CNS. Using a series of chimeric receptors, we show that the SAX-3 cytoplasmic domain can act equivalently to the Robo1 cytoplasmic domain to signal midline repulsion in Drosophila neurons when combined with the Robo1 ectodomain, but reciprocal chimeras combining the SAX-3 ectodomain with the Robo1 cytodomain cannot effectively signal midline repulsion. We further show that SAX-3 is insensitive to endosomal sorting by Drosophila Comm, but is subject to sorting-independent antagonism by Comm. The SAX-3 and Robo1 cytodomains can act equivalently to signal midline repulsion in Drosophila axons Our Robo1/SAX-3 chimeric receptors reveal that the SAX-3 cytodomain can signal midline repulsion in Drosophila neurons when paired with the ectodomain of Robo1, at a level that is indistinguishable from the native Robo1 cytodomain. While this observation does not directly demonstrate that the signaling mechanisms downstream of Robo1 and SAX-3 are identical in fly and worm neurons, it does suggest that the SAX-3 cytodomain is capable of interacting with and activating the downstream signaling components necessary for midline repulsive signaling in fly neurons. Our sequence comparisons indicate that all four of the previously identified CC motifs (CC0-CC3) are present in the SAX-3 cytodomain, along with a third Abl phosphorylation site
(in addition to the two in CC0 and CC1) that is outside of these four motifs. This supports the idea that all critical signaling elements are present in the SAX-3 cytodomain. Further, since the order of the CC2 and CC3 motifs is switched relative to the Robo1 cytodomain, and the spacing between these sequences is not identical in the two receptors, this indicates that the order and relative positions of these sequence elements are not critical for their signaling output. Long et al. have used a similar chimeric receptor approach to show that the cytodomains of the unrelated repulsive axon guidance receptors Derailed (Drl) and Unc5 can also substitute for the Robo1 cytodomain to rescue robo1-dependent midline repulsion when combined with the Robo1 ectodomain (Long et al. 2016), suggesting that all three of these receptors may function through a common downstream signaling pathway. Our results indicate that the SAX-3 cytodomain can also activate this pathway. Ectodomain-dependent differences in midline repulsive output Both the structural ectodomain arrangement of 5 Ig + 3 Fn domains and the Slit-binding sequences within the N-terminal Ig1 domain are highly conserved across Robo receptors in many bilaterian species, while the cytoplasmic domain sequences are much more divergent. Ectodomain-dependent differences in activities between Drosophila Robo paralogs have been described in contexts other than midline repulsion (Evans and Bashaw 2010; Evans et al. 2015), while differences between Drosophila Robo1's and Robo3's abilities to rescue midline repulsion in robo1 mutants have been attributed entirely to differences in their cytoplasmic domains (namely, the CC1-CC2 region of Robo1) (Spitzweck et al. 2010). However, the chimeric receptors described here indicate that the cytoplasmic domains of Drosophila Robo1 and C. elegans SAX-3 are functionally interchangeable in the context of midline repulsion in Drosophila neurons, while their ectodomains are not. One possibility is that there are quantitative differences in affinity for Drosophila Slit caused by sequence divergence within Ig1 (which is 46% identical between Robo1 and SAX-3; Figure 1). Although it is clear that SAX-3 can detect and respond to Drosophila Slit, it does so less efficiently than Drosophila Robo1, as seen in both the gain-of-function and robo1 mutant rescue experiments presented here. Another possibility is that, independent of Slit affinity, there may be ectodomain conformational arrangements or changes in response to Slit binding that are necessary for optimal signaling and that SAX-3 does not share with Robo1. In either case, it is unclear why chimeras containing the SAX-3 ectodomain (which cannot rescue midline repulsion at all in robo1 mutants) would be less active than full-length SAX-3 (which can partially rescue midline repulsion). Perhaps there are quantitative effects of ectodomain/cytodomain compatibility that further reduce the repulsive signaling efficiency when the SAX-3 ectodomain is paired with the Robo1 cytodomain.
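The percent-identity figure quoted for Ig1 can be illustrated with a tiny helper that scores identity over an existing alignment; the aligned strings below are hypothetical stand-ins, not the real Robo1 and SAX-3 Ig1 sequences.

```python
# Toy percent-identity calculation over a pre-aligned pair of sequences.
# Columns containing a gap character ('-') are skipped in the denominator;
# the sequences are hypothetical placeholders, not the actual Ig1 domains.
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    assert len(aligned_a) == len(aligned_b)
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if a != "-" and b != "-"]
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

print(percent_identity("MKLSV-TREWLIG", "MRLSVATKEWL-G"))
```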
Antagonism of SAX-3-dependent midline repulsion by Comm Comm is able to antagonize Drosophila Robo1 through two apparently distinct mechanisms: one that involves endosomal sorting of Robo1 and depends on both the peri-membrane and Fn3 regions of Robo1 (Keleman et al. 2002; Gilestro 2008; Brown et al. 2018), and a second, as yet uncharacterized sorting-independent mechanism (Gilestro 2008). It is unclear whether sorting-independent regulation of Slit-Robo1 repulsion by Comm is achieved through direct regulation of Robo1, or through some other component(s) of the Slit-Robo1 pathway (Gilestro 2008). The epistatic relationship between comm and robo1 indicates that this regulation must occur at the level of robo1 or upstream (Seeger et al. 1993). Whatever the mechanism or level of action of this sorting-independent regulation, SAX-3 must also be sensitive to it, as removal of comm in robo1^1, [robo1::sax3] embryos produces a commissureless phenotype indicative of hyperactive Slit-Robo repulsion (Figure 6G). There do not appear to be any comm orthologs present in C. elegans. Perhaps comm regulation does not depend on specific sequences in Robo1/SAX-3 or, alternatively, this regulation may rely on sequences or structural arrangements in Robo1 that are conserved in SAX-3 for other reasons. Data availability Drosophila strains and recombinant DNA plasmids are available upon request. The authors affirm that all data necessary for confirming the conclusions of the article are present within the article, figures, and tables.
9,466.8
2020-10-12T00:00:00.000
[ "Biology" ]
Coherent ψ(2S) photo-production in ultra-peripheral Pb-Pb collisions at √s_NN = 2.76 TeV We have performed the first measurement of the coherent ψ(2S) photo-production cross section in ultra-peripheral Pb-Pb collisions at the LHC. This charmonium excited state is reconstructed via the ψ(2S) → l+l− and ψ(2S) → J/ψ π+π− decays, where the J/ψ decays into two leptons. The analysis is based on an event sample corresponding to an integrated luminosity of about 22 μb^-1. The cross section for coherent ψ(2S) production in the rapidity interval −0.9 < y < 0.9 is dσ_coh_ψ(2S)/dy = 0.83 ± 0.19 (stat + syst) mb. The ψ(2S) to J/ψ coherent cross section ratio is 0.34 +0.08/−0.07 (stat + syst). The obtained results are compared to predictions from theoretical models. Introduction Two-photon and photo-nuclear interactions at unprecedented energies can be studied in heavy-ion Ultra-Peripheral Collisions (UPC) at the LHC. In such collisions the nuclei are separated by impact parameters larger than the sum of their radii and therefore hadronic interactions are strongly suppressed. The cross sections for photon-induced reactions remain large because the strong electromagnetic field of the nucleus enhances the intensity of the photon flux, which grows as the square of the charge of the nucleus. The physics of ultra-peripheral collisions is reviewed in [1,2]. Exclusive photo-production of vector mesons at high energy, where a vector meson is produced in an event with no other final-state particles, is of particular interest, since it provides a measure of the nuclear gluon distribution at low Bjorken-x. Exclusive production of charmonium in photon-proton interactions at HERA [3][4][5], γ + p → J/ψ(ψ(2S)) + p, has been successfully modelled in terms of the exchange of two gluons with no net colour transfer [6]. Experimental data on this process from HERA have been used to constrain the proton gluon distribution at low Bjorken-x [7]. Exclusive vector meson production in heavy-ion interactions is expected to probe the nuclear gluon distribution [8], for which there is considerable uncertainty in the low-x region [9]. Exclusive ρ0 [10] and J/ψ [11] production has been studied in Au-Au collisions at RHIC. The exclusive photo-production can be either coherent, where the photon couples coherently to almost all the nucleons, or incoherent, where the photon couples to a single nucleon. Coherent production is characterized by low transverse momenta of the vector mesons (p_T ≲ 60 MeV/c), and the target nucleus normally does not break up. However, the exchange of additional photons, radiated independently from the original one, may lead to the target nucleus breaking up or de-exciting through neutron emission. Simulation models estimate that this occurs in about 30% of the events [12]. Incoherent production is characterized by a somewhat higher transverse momentum of the vector mesons (p_T ≳ 500 MeV/c). In this case the nucleus interacting with the photon breaks up but, apart from single nucleons or nuclear fragments in the very forward region, no other particles are produced besides the vector meson. We published the first results on the coherent photo-production of J/ψ in UPC Pb-Pb collisions at the LHC [13] in the rapidity region −3.6 < y < −2.6, which constrain the nuclear gluon distribution at Bjorken-x ≈ 10^-2.
Shortly afterwards, ALICE published a second paper measuring both the coherent and the incoherent J/ψ cross section at mid-rapidity [14], allowing the nuclear gluon distribution at Bjorken-x ≈ 10^-3 to be explored. The present analysis is performed in the same rapidity region as the measurement reported in [14] and is likewise sensitive to Bjorken-x ≈ 10^-3. There are very few studies of photo-production of ψ(2S) off nuclei. Incoherent photo-production, using a 21 GeV photon beam off a deuterium target, has been studied in [15]; non-exclusive photo-production, using bremsstrahlung photons with an average energy of 90 GeV off a 6Li target, has been reported in [16]. However, no previous measurements of ψ(2S) coherent photo-production off nuclear targets have been reported in the literature. In this letter, results from ALICE on exclusive coherent photo-production of ψ(2S) mesons at mid-rapidity in ultra-peripheral Pb-Pb collisions at √s_NN = 2.76 TeV are presented. The measured coherent ψ(2S) cross section and the ψ(2S)/J/ψ cross section ratio are compared to model predictions [17][18][19][20][21][22]. Detector description The main components of the ALICE detector are a central barrel placed in a large solenoid magnet (B = 0.5 T), covering the pseudorapidity region |η| < 0.9, and a muon spectrometer at forward rapidity, covering the range −4.0 < η < −2.5 [23]. Three central barrel detectors are used in this analysis. The ALICE Inner Tracking System (ITS) is made of six silicon layers, all of them used in this analysis for particle tracking. The Silicon Pixel Detector (SPD) makes up the two innermost layers of the ITS, covering the pseudorapidity ranges |η| < 2 and |η| < 1.4 for the inner (radius 3.9 cm) and outer (average radius 7.6 cm) layers, respectively. The SPD is a fine-granularity detector, having about 10^7 pixels, and is used for triggering purposes. The Time Projection Chamber (TPC) is used for tracking and for particle identification [24] and has an acceptance covering the pseudorapidity region |η| < 0.9. The Time-of-Flight detector (TOF) surrounds the TPC and is a large cylindrical barrel of Multigap Resistive Plate Chambers (MRPC) with about 150,000 readout channels, giving very high precision timing. The TOF pseudorapidity coverage matches that of the TPC. Used in combination with the tracking system, the TOF detector provides charged particle identification up to a transverse momentum of about 2.5 GeV/c (pions and kaons) and 4 GeV/c (protons). In addition, the TOF detector is also used for triggering [25]. The analysis presented below also makes use of two forward detectors. The V0 counters consist of two arrays of 32 scintillator tiles each, covering the ranges 2.8 < η < 5.1 (V0-A, on the opposite side of the muon spectrometer) and −3.7 < η < −1.7 (V0-C, on the same side as the muon spectrometer), positioned at z = 340 cm and z = −90 cm from the interaction point, respectively. Finally, two sets of hadronic Zero Degree Calorimeters (ZDC) are located at 114 m on either side of the interaction point. The ZDCs detect neutrons emitted in the very forward and backward regions (|η| > 8.7), such as neutrons produced by electromagnetic dissociation of the nucleus [26] (see Section 3).
Data analysis The event sample considered for the present analysis was collected during the 2011 Pb-Pb run, using a dedicated Barrel Ultra-Peripheral Collision trigger (BUPC), selecting events with the following characteristics: (i) at least two hits in the SPD detector; (ii) a number of fired pad-ORs (N_on) in the TOF detector [25] in the range 2 ≤ N_on ≤ 6, with at least two of them separated in azimuth by Δφ in the range 150° ≤ Δφ ≤ 180°; (iii) no hits in the V0-A and no hits in the V0-C detectors. The ψ(2S) → l+l− channel For the di-muon and di-electron decay channels the following selection criteria were applied: (i) a reconstructed primary vertex. The primary vertex position is determined from the tracks reconstructed in the ITS and TPC as described in Ref. [27]. The vertex reconstruction algorithm is fully efficient for events with at least one reconstructed primary charged particle in the common TPC and ITS acceptance; (ii) only two good tracks with at least 70 TPC clusters and at least 1 SPD cluster each. Moreover, particles originating in secondary hadronic interactions or conversions in the detector material were removed using a distance-of-closest-approach (DCA) cut: the tracks extrapolated to the reconstructed vertex should have a DCA in the beam direction DCA_L ≤ 2 cm, and in the plane orthogonal to the beam direction DCA_T ≤ 0.0182 + 0.0350/p_T^1.01, where p_T is the transverse momentum in GeV/c [28]; (iii) at least one of the two good tracks selected in criterion (ii) should have p_T ≥ 1 GeV/c; this cut reduces the background, while it only marginally affects genuine leptons from J/ψ decays; (iv) the V0 trigger required no signal within a time window of 25 ns around the collision time in any of the scintillator tiles of both V0-A and V0-C. Signals in both V0 detectors were searched for offline in a larger window according to the prescription described in [14]; (v) the specific energy loss dE/dx of the two tracks is compatible with that of electrons or muons (see below); it is worth noting that the TPC resolution does not allow discrimination between muons and charged pions; (vi) the two tracks have opposite charges. The optimization of the selection criteria to tag the ψ(2S) efficiently was performed using the STARLIGHT [17] event generator combined with the full ALICE detector simulation. About 950,000 coherent and incoherent events were simulated for each decay channel. The event total transverse momentum is reconstructed by adding the p_T of the two decay leptons, and the selection of coherent events requires a threshold on this quantity. Transverse momentum carried away by bremsstrahlung photons results in a broadening of the event total p_T; bremsstrahlung effects are more important for the di-electron decay, so the corresponding p_T threshold has to be larger in this case. Consequently, a cut of p_T < 0.15 GeV/c for di-muons and p_T < 0.3 GeV/c for di-electrons was applied: 98% (77%) of the coherent signal is retained for di-muons (di-electrons). Fig. 1 (top panel) shows the invariant mass (left) and the p_T distribution (right) for these decay channels. The p_T distributions clearly show a coherent peak at low p_T. No events are found with a transverse momentum exceeding 1.5 GeV/c, as expected for a negligible hadronic contamination, which would be characterized by a much larger event p_T.
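A minimal sketch of how the coherent-candidate selection described above could be expressed in code is given below, assuming dilepton candidates are available as simple track records. The container and helper names are illustrative placeholders and are not part of the ALICE analysis software.

```python
# Minimal sketch of the coherent-candidate selection described in the text.
from dataclasses import dataclass
from math import hypot

@dataclass
class Track:
    px: float          # GeV/c
    py: float          # GeV/c
    charge: int        # +1 or -1
    n_tpc_clusters: int
    n_spd_clusters: int
    dca_long: float    # DCA along the beam direction, cm
    dca_trans: float   # DCA in the transverse plane

    @property
    def pt(self) -> float:
        return hypot(self.px, self.py)

def good_track(t: Track) -> bool:
    """Track-quality and DCA cuts (selection criterion ii)."""
    if t.n_tpc_clusters < 70 or t.n_spd_clusters < 1:
        return False
    if t.dca_long > 2.0:
        return False
    # Transverse DCA cut as quoted in the text (p_T in GeV/c).
    return t.dca_trans <= 0.0182 + 0.0350 / t.pt**1.01

def coherent_candidate(t1: Track, t2: Track, channel: str) -> bool:
    """Apply the pair-level cuts (iii), (vi) and the coherent p_T threshold."""
    if not (good_track(t1) and good_track(t2)):
        return False
    if t1.charge * t2.charge >= 0:          # opposite-sign pair (vi)
        return False
    if max(t1.pt, t2.pt) < 1.0:             # at least one track with p_T >= 1 GeV/c (iii)
        return False
    pair_pt = hypot(t1.px + t2.px, t1.py + t2.py)
    threshold = 0.15 if channel == "dimuon" else 0.30   # GeV/c, coherent selection
    return pair_pt < threshold
```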
The number of ψ(2S) candidates is obtained by fitting the invariant mass distribution of both channels to an exponential function describing the underlying continuum plus a Crystal Ball function describing the ψ(2S) signal. The Crystal Ball ψ(2S) resonance mass and width were left free, while the tail parameters (α and n) were fixed to the values obtained from the Monte Carlo simulation. The mass (width, calculated from the standard deviation) value from the fit is 3.664 ± 0.013 GeV/c^2 (22 ± 9 MeV/c^2), in good agreement with the known value of the ψ(2S) mass and compatible with the absolute calibration accuracy of the barrel. The obtained yield (see Table 1) was N_yield = 18.4 ± 9.3. According to STARLIGHT, the fraction (f_I) of incoherent over coherent events in the low-p_T region is 4.4% for di-muons and 16.6% for di-electrons. Another theoretical model, described in [22], predicts a much higher coherent over incoherent cross section ratio, resulting in an f_I prediction 50% smaller (Table 1 summarizes the main experimental results and correction parameters used in the cross section evaluation; its bottom line shows the cross section for the three ψ(2S) decay channels). Taking the average of these two predictions, (3.3 ± 1.1)% for di-muons and (11.1 ± 3.4)% for di-electrons is obtained. The uncertainty was obtained by requiring the used value to agree with the two models within 1σ. The final f_I (see Table 1) is the average of the f_I for di-electrons and di-muons, weighted with the corresponding acceptance and efficiency (Acc × ε). The remaining background (N_back) was estimated by studying the wrong-sign event sample, obtained by applying cuts (i) to (v). For the di-muon and di-electron channels no wrong-sign events were found in the invariant mass range considered, giving N_coh_ψ(2S) = 17.5 ± 9.0. The coherent ψ(2S) differential cross section can be written as dσ_coh_ψ(2S)/dy = N_coh_ψ(2S) / [(Acc × ε)_ψ(2S) × BR(ψ(2S) → l+l−) × Δy × L_int], (2) where (Acc × ε)_ψ(2S) corresponds to the acceptance and efficiency as discussed above, BR(ψ(2S) → l+l−) is the branching ratio for ψ(2S) decay into leptons [29], Δy = 1.8 is the rapidity bin size, and L_int the total integrated luminosity. These values are listed in Table 1. The systematic uncertainty on the yield for the dilepton channel is obtained by varying the bin size and by replacing the exponential with a polynomial to fit the γγ process. In addition, the Crystal Ball function parameters can also be obtained by fitting a simulated sample made of a cocktail of ψ(2S) and γγ events, and then used to fit the coherent-enriched data sample. By applying the different methods reported above, the maximum difference in the obtained yield is 12%: this value is used as the systematic uncertainty on the yield. The STARLIGHT model predicts a dependence of the ψ(2S) cross section on rapidity, giving a ≈ 10% variation over the rapidity range −0.9 < y < 0.9. In order to evaluate the systematic uncertainty on the acceptance coming from the choice of generator, a flat dependence of dσ_ψ(2S)/dy in the interval −0.9 < y < 0.9, as predicted by other models, was used. The relative difference in (Acc × ε) between the input shapes was 1.0%, and is taken into account in the systematic uncertainty calculation. The systematic uncertainty on the tracking efficiency was estimated by comparing, in data and in Monte Carlo, the ITS (TPC) hit matching efficiency to tracks reconstructed with TPC (ITS) hits only. The trigger efficiency was measured relying on a data sample collected in a dedicated run triggered by the ZDCs only.
Events with a topology satisfying the BUPC conditions, given at the beginning of Section 3, were selected. The resulting trigger efficiency was compared to that obtained from the Monte Carlo simulation, showing agreement within +4.0%/−9.0%. The e/μ separation was performed using two methods: (a) a sharp cut in the scatter plot of the first lepton dE/dx as a function of the second lepton dE/dx, where all the particles beyond a given threshold are considered as electrons; (b) using the average of the electron (muon) dE/dx and considering as electrons (muons) the particles within three sigma of the Bethe-Bloch expectation. The difference between the two methods was used as an estimate of the systematic uncertainty, giving ±2%. The systematic uncertainty related to the application of the V0 offline decision (cut (iv) in Section 3.1) was evaluated by repeating the analysis with this cut excluded. This results in a more relaxed event selection, increasing the cross section by 6%. The integrated luminosity was measured using a trigger for the most central hadronic Pb-Pb collisions. The cross section for this process was obtained with a van der Meer scan [30], giving σ = 4.10 +0.22/−0.13 (syst) b [31]. The integrated luminosity for the BUPC trigger sample, corrected for the trigger live time, was L_int = 22.4 +0.9/−1.2 μb^-1, where the uncertainty is the quadratic sum of the cross section uncertainty quoted above and the trigger dead time uncertainty. An alternative method based on using neutrons detected in the two ZDCs was also used. The ZDC trigger condition required a signal in at least one of the two calorimeters, thus selecting single electromagnetic dissociation as well as hadronic interactions. The cross section for this trigger was also measured with a van der Meer scan [26]. The integrated luminosity obtained for the BUPC sample by this method is consistent with the one quoted above within 2.5%. The sources and the values of the systematic uncertainties are listed in Table 2. As a result, in the rapidity interval −0.9 < y < 0.9 a cross section dσ_coh_ψ(2S)/dy = 0.76 ± 0.40 (stat) +0.12/−0.13 (syst) mb is obtained. The ψ(2S) → π+π−J/ψ channel The analysis criteria used to select these channels are similar to those described in Section 3.1, with the requirements on the track quality slightly relaxed to keep the efficiency at an acceptable level. Such a softening of the cuts was allowed by the smaller QED background in four-track events compared to the channels described in Section 3.1. Selection (ii) is modified so that four good tracks with at least 50 TPC clusters each are required. In addition to cuts (i) to (vi), the invariant mass of the di-muons (di-electrons) was required to match that expected for leptons from J/ψ decay, i.e. 3.0 < M_μ+μ− < 3.2 GeV/c^2 for di-muons (2.6 < M_e+e− < 3.2 GeV/c^2 for di-electrons). The acceptance and the efficiency were estimated with similar techniques. Due to the coupling to the photon, the ψ(2S) is transversely polarized. According to previous experiments [32], J/ψ and Incoherent (1). The background was estimated by looking at events with all the possible combinations of wrong-sign tracks. One event was found in the di-muon sample and no events in the di-electron sample. The fraction of the incoherent sample contaminating the coherent sample was estimated as in Section 3.1, and was found to be 3.4% in the ψ(2S) → π+π−μ+μ− channel and 13.2% in the ψ(2S) → π+π−e+e− channel.
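As a worked illustration of Equation (2), the sketch below evaluates the differential cross section from a yield, an acceptance-times-efficiency factor, a branching ratio, the rapidity bin width, and the integrated luminosity. Only Δy = 1.8, L_int = 22.4 μb^-1, and the dilepton yield of 17.5 are taken from the text; the acceptance-times-efficiency and branching ratio values are placeholders, since the per-channel correction factors are listed in Table 1 and not reproduced here.

```python
def dsigma_dy(n_coh, acc_eff, branching_ratio, delta_y, lumi_int):
    """Coherent differential cross section, Equation (2):
    dsigma/dy = N_coh / (Acc*eps * BR * delta_y * L_int)."""
    return n_coh / (acc_eff * branching_ratio * delta_y * lumi_int)

# Example with placeholder correction factors (the real per-channel Acc*eps and
# BR values are taken from Table 1 of the paper and are not reproduced here):
n_coh = 17.5              # background-subtracted coherent yield, dilepton channel
acc_eff = 0.10            # hypothetical acceptance x efficiency
br = 7.9e-3               # hypothetical BR(psi(2S) -> l+l-)
delta_y = 1.8             # rapidity bin size
lumi = 22.4               # integrated luminosity in inverse microbarn

# Result comes out in microbarn per unit rapidity; divide by 1000 for mb.
print(dsigma_dy(n_coh, acc_eff, br, delta_y, lumi) / 1000, "mb")
```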
The systematic uncertainty on the yield was obtained by using an alternative set of cuts. According to the kinematics of the ψ(2S) → π+π−J/ψ decay channel, the pions are characterized by a small transverse momentum (p_T < 0.4 GeV/c), while the lepton transverse momentum exceeds 1.1 GeV/c. Instead of selecting events where the di-lepton invariant mass is close to that of the J/ψ, events were selected according to the kinematics of the decay products of the ψ(2S). All the other cuts were kept as described in Section 3.1. Two alternative selections were considered: (i) a sample where both leptons have a transverse momentum larger than 1.1 GeV/c; and (ii) a sample without any decay product with transverse momentum in the range 0.4 < p_T < 1.2 GeV/c. The ψ(2S) yield was unchanged for both these selections, while a small change applies to the acceptance and efficiency in the π+π−J/ψ decay, giving a negligible systematic uncertainty. The relative difference in (Acc × ε) between the STARLIGHT rapidity shape and a flat rapidity shape was 2.0% for the ψ(2S) → π+π−J/ψ channel, and is taken into account in the systematic uncertainty calculation. As a result, the obtained cross sections in the rapidity interval −0.9 < y < 0.9 are dσ_coh_ψ(2S)/dy = 0.81 ± 0.22 (stat) +0.09/−0.10 (syst) mb for the ψ(2S) → π+π−J/ψ, J/ψ → μ+μ− channel and dσ coh Combining the cross sections The ψ(2S) coherent production cross sections reported in Sections 3.1 and 3.2 (Fig. 2) were combined, using the statistical and the uncorrelated systematic uncertainties as weights. Finally, the correlated systematic uncertainty was added. Asymmetric uncertainties were combined according to the prescriptions given in [33]. The average cross section in the rapidity interval −0.9 < y < 0.9 is dσ_coh_ψ(2S)/dy = 0.83 ± 0.19 (stat + syst) mb. Coherent production with nuclear break-up or nucleus de-excitation followed by neutron emission In UPC one or both nuclei may become excited due to the exchange of additional photons. This excitation may lead to break-up of the nucleus via emission of one or more neutrons. The neutron emission was measured using the ZDC detector for the events studied in the ψ(2S) → l+l−π+π− decay channel. We found 20 events, (71 +9/−11)%, with no neutrons on either side (0n 0n); 8 events, (29 +11/−9)%, with at least one neutron on either side (Xn); 7 events, (25 +10/−8)%, with no neutron on one side and at least one neutron on the other (0n Xn); and 1 event, (4 +8/−3)%, with at least one neutron on both sides (Xn Xn). Uncertainties on the fractions are obtained assuming a binomial distribution. These fractions are in agreement with predictions by STARLIGHT [12] and RSZ [8], as shown in Table 3. Table 3 Number of events for different neutron emissions in the ψ(2S) → l+l−π+π− process. The ψ(2S) to J/ψ cross section ratio In order to compare the coherent ψ(2S) cross section to the previously measured J/ψ cross section [14], we report on the ψ(2S)/J/ψ cross section ratio. Many of the systematic uncertainties of these measurements are correlated and cancel out in the ratio. Since the analysis relies on the same data sample and on the same trigger, the systematic uncertainties for the luminosity evaluation, trigger efficiency, and dead time were considered as fully correlated. Several uncertainties, corresponding to the same quantity measured at slightly different energies (corresponding to the different masses), are partially correlated, while the uncorrelated part is small.
Namely, the systematic uncertainties for e/μ separation and for the measurement of the neutron number are strongly correlated and hence can be neglected in the ratio. The systematic uncertainties connected to the signal extraction and the branching ratio are considered uncorrelated between the two measurements. The quadratic sum of these sources together with the statistical uncertainty was used to combine the different channels in both measurements. For the combination of asymmetric uncertainties the prescription from reference [33] was used. The value of the ratio is (dσ_coh_ψ(2S)/dy)/(dσ_coh_J/ψ/dy) = 0.34 +0.08/−0.07 (stat + syst). Discussion We have previously measured the coherent photo-production cross section for the J/ψ vector meson at mid and forward rapidities [13,14]. The results showed that the measured cross section was in good agreement with models that include a nuclear gluon shadowing consistent with the EPS09 parametrization [9]. Models based on the colour dipole model or on hadronic interactions of the J/ψ with nuclear matter were disfavoured. The ψ(2S) is similar to the J/ψ in its composition (cc̄) and mass, but it has a more complicated wave function, as a consequence of its being a 2S rather than a 1S state, and it has a larger radius. There is a consensus view about the presence of a node in the ψ(2S) wave function: a few authors pointed out that this node gives a natural explanation of the smaller ψ(2S) cross section compared to the J/ψ one; in addition, it was argued that the node may give strong cancellations in the scattering amplitude in γ-nucleus interactions [34,35]. In Pb-Pb collisions the poor knowledge of the ψ(2S) wave function as a function of the transverse quark pair separation d makes it difficult to estimate the nuclear matter effects. There are predictions by five different groups for coherent ψ(2S) production in ultra-peripheral Pb-Pb collisions; some of them published several different calculations (see Fig. 3). The model by Adeluyi and Nguyen (AN) is based on a calculation where the ψ(2S) cross section is directly proportional to the gluon distribution squared [18]. It is essentially the same model used by Adeluyi and Bertulani [36] to calculate the coherent J/ψ cross section, which was found to be in good agreement with the ALICE data when coupled to the EPS09 shadowing parametrization. The calculations are done for four different parameterizations of the nuclear gluon distribution: EPS08 [37], EPS09 [9], HKN07 [38], and MSTW08 [39]. (Fig. 3: measured coherent ψ(2S) cross section at √s_NN = 2.76 TeV in −0.9 < y < 0.9 compared to the theoretical calculations described in the text; the uncertainty was obtained using the prescription from reference [33].) The last parametrization (MSTW08) corresponds to a scaling of the γp cross section neglecting any nuclear effects (impulse approximation). It is worth noting that they used for the ψ(2S) the same wave function as for the J/ψ. The model by Gay Ducati, Griep, and Machado (GDGM) [19] is based on the colour dipole model and is similar to the model by Goncalves and Machado for coherent J/ψ production [20]. The latter calculation could not reproduce the ALICE coherent J/ψ measurement. The new calculation has, however, been tuned to correctly reproduce the ALICE J/ψ result. The model by Lappi and Mantysaari (LM) is based on the colour dipole model [21]. Their prediction for the J/ψ was about a factor of two above the cross section measured by ALICE. The current ψ(2S) cross section has been scaled down to compensate for this discrepancy.
The model by Guzey and Zhalov (GZ) is based on the leading twist approximation of perturbative QCD [22]. They used different gluon shadowing predictions, based on the dynamical leading twist theory or on the EPS09 fit. Finally, STARLIGHT uses the Vector Meson Dominance model and a parametrization of the existing HERA data to calculate the ψ(2S) cross section from a Glauber model assuming only hadronic interactions of the ψ(2S) [17]. This model does not implement nuclear gluon shadowing. It is worth noting that removing all nuclear effects in STARLIGHT gives a cross section for J/ψ production almost identical to the Adeluyi-Bertulani model if the MSTW08 parametrization is used. The latter corresponds to a scaling of the γ-p cross section neglecting any nuclear effects, i.e. considering all nucleons as contributing to the scattering (impulse approximation). Conversely, when applying the same procedure to ψ(2S) vector meson production, the comparison shows that the STARLIGHT cross section is 50% smaller than the Adeluyi-Nguyen one. This result may indicate that the γ + p → ψ(2S) + p cross section is parametrized in a different way in the two models; the limited experimental data make it difficult to tune the models. For the J/ψ, a wealth of γ + p → J/ψ + p cross section data has been obtained by ZEUS and H1, while the process γ + p → ψ(2S) + p was measured by H1 at four different energies only. This makes it much harder to constrain the theoretical cross section to the experimental data. Since the effect of gluon shadowing decreases the vector meson production cross section, this may explain why the ψ(2S) STARLIGHT cross section is close to the AN-EPS09 model, while it is a factor of two larger for the J/ψ. The coherent ψ(2S) photo-production cross section is compared to calculations from twelve different models in Fig. 3. Since a comprehensive model uncertainty is not provided by the model authors, the comparison with the experimental result is quantified by dividing the difference between the value of each model at y = 0 and the experimental result by the uncertainty of the measurement itself. The present measurement disfavours the EPS08 parametrization when implemented in the AN model, as well as the GDGM models with strong shadowing. Similarly, the models that neglect any nuclear effect are disfavoured at a level between 1.5 and 3 sigma. The systematic uncertainties on the cross section parametrization and the experimental statistical uncertainties do not allow a preference to be given between the models implementing moderate nuclear gluon shadowing (such as AN-EPS09) and those taking into account Glauber nuclear effects only (such as STARLIGHT). Fig. 4 shows the ψ(2S) to J/ψ cross section ratio measured in Pb-Pb collisions by ALICE and those obtained in pp̄ collisions by CDF [40] and in pp collisions by LHCb [41]. (Fig. 4 also shows the predictions from different theoretical models; the uncertainty on the ALICE ratio was obtained using the prescription from reference [33].) Both STARLIGHT and the GDGM model correctly predict the experimental pp results. The figure also shows the ratio measured by H1 in γp collisions. The H1 result is compatible with the pp measurements, while the ALICE point is 2σ larger than the average of the pp measurements, although still with sizable uncertainties.
This difference may indicate that nuclear effects and/or gluon shadowing modify the J/ψ and the ψ(2S) production in different ways, since other effects, such as the different photon flux due to the larger ψ(2S) mass, could not explain such a difference. Conclusions We performed the first measurement of the coherent ψ(2S) photo-production cross section in Pb-Pb collisions, obtaining dσ_coh_ψ(2S)/dy = 0.83 ± 0.19 (stat + syst) mb in the interval −0.9 < y < 0.9. This result disfavours models considering all nucleons as contributing to the scattering and those implementing strong shadowing, such as the EPS08 parametrization. The ψ(2S) to J/ψ cross section ratio in the rapidity interval −0.9 < y < 0.9 is 0.34 +0.08/−0.07 (stat + syst). Most of the models underpredict this ratio by 2-2.5 σ. Current models of ψ(2S) production in ultra-peripheral collisions require further efforts; the data shown in the present analysis may help to improve the understanding of this process and to refine the theory behind exclusive vector meson photo-production. Acknowledgements The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the out-
6,868.4
2015-01-01T00:00:00.000
[ "Physics" ]
Directed Flow in Microscopic Models in Relativistic A + A Collisions † Evolution of the directed flow of charged particles produced in relativistic heavy-ion collisions at energies 4 ≤ √s ≤ 19.6 GeV is considered within two microscopic transport models, ultra-relativistic quantum molecular dynamics (UrQMD) and the quark-gluon string model (QGSM). In both models, the directed flow of protons changes its sign at midrapidity from antiflow to normal flow with decreasing energy of collisions, whereas the flows of mesons and antiprotons remain antiflow-oriented. For lighter colliding systems, such as Cu+Cu or S+S, the change of the proton directed flow occurs at lower bombarding energies and for more central topologies compared to a heavy Au+Au system. The differences can be explained by dissimilar production zones of different hadrons and by the influence of spectators. Directed flows of the most abundant hadronic species at midrapidity are found to be formed within t = 10-12 fm/c after the beginning of the nuclear collision. The influence of hard and soft mean-field potentials on the directed flow is also studied. Introduction The collective flow was proposed as a measure of the expansion of hadrons, produced in relativistic heavy-ion collisions, in both longitudinal and transverse directions in [1,2]. The first experimental measurement of the flow was done by the Plastic Ball collaboration [3] at the Bevalac. Since then, the intensive study of this phenomenon by theoreticians and experimentalists has begun. Initially, the collective flow in the transverse plane, which is orthogonal to the beam axis, was decomposed into the bounce-off flow projected on the impact parameter axis and the squeeze-out flow orthogonal to the reaction plane, see e.g., [4]. The method of decomposition of the transverse flow into an infinite Fourier series was proposed in [5,6]. It states that the invariant cross section can be written as E d^3N/d^3p = (1/2π) d^2N/(p_T dp_T dy) [1 + 2 Σ_{n≥1} v_n cos(n(φ − Ψ_n))], (1) where y is rapidity, p_T is the particle transverse momentum, φ is the azimuth between the p_T and the participant event plane, and Ψ_n is the azimuthal angle of the event plane of the n-th flow component. The flow coefficients are v_n = ⟨cos[n(φ − Ψ_n)]⟩, (2) where the averaging in Equation (2) is performed over all hadrons in a single event and over all events. The first term in Equation (1) represents the isotropic flow, whereas the sum is related to the anisotropic flow. The first components of the latter are known as directed, v_1, elliptic, v_2, and triangular, v_3, flow, and so forth. It appears that in the energy range between √s = 4 GeV and √s = 10 GeV the directed flow of protons at midrapidity, v_1^p |_{y=0}, changes its sign from "normal" (for definition, see below) to "antiflow", whereas the directed flows of mesons and antiprotons remain antiflow-oriented. Also, it is well known that the directed flow of hadrons should drop and even vanish in the vicinity of a first-order deconfinement phase transition [7][8][9]. This circumstance explains the interest in the study of directed flow in the beam energy scan (BES) program at RHIC, at the CERN SPS, and at the future accelerators NICA JINR and FAIR GSI. In the present study we employ two microscopic transport models, ultra-relativistic quantum molecular dynamics (UrQMD) [10,11] and the quark-gluon string model (QGSM) [12][13][14], to investigate heavy-ion collisions in the energy range 4 ≤ √s ≤ 19.6 GeV. Of particular interest is the models' ability to reproduce the basic peculiarities in the development of the directed flow of identified hadrons. The paper is organized as follows. Primary features of both models are sketched in Section 2.
Section 3 presents the results of our study of the energy-, mass-, and y-dependence of v_1 of different hadron species. The time evolution of v_1 at midrapidity is compared to the directed flow of hadrons frozen out at different times. The influence of the mean-field potentials on the development of v_1 is also studied. Conclusions are drawn in Section 4. Similarities and Differences Between Microscopic Models Both UrQMD [10,11] and QGSM [12][13][14] are Monte Carlo event generators designed for the description of relativistic hh, hA and A+A interactions. The multiparticle production takes place via formation and fragmentation of specific colored objects, strings, stretching uniformly between the quarks, diquarks, and their antistates. The string tension is about κ ≈ 1 GeV/fm, and strings break into hadrons via the Schwinger-like mechanism of (di)quark-anti(di)quark formation. However, the mechanisms of string formation and string fragmentation in the two models are different. There are two possible methods of string excitation. UrQMD employs the longitudinal excitation of strings, which is characteristic of all Lund-based string models [15]. Here the mass of the string arises from the momentum transfer, and the strings are stretched between the constituents belonging to the same hadron. Also, for hard collisions with momentum transfer larger than 1.5 GeV/c, UrQMD employs PYTHIA [16]. QGSM utilizes the color exchange mechanism [17], in which the constituents at the string ends belong to different hadrons. The variety of subprocesses in the latter case is much richer compared to the longitudinal excitation. For the string fragmentation process the string models utilize three possible schemes. The first scenario, suggested by the Lund group [15], implies that the string always splits into a sub-string and a particle on the mass shell at the end of the fragmenting string. This option is realized in UrQMD. In the second scheme the string splits into two sub-strings according to the area law [18]. The third option is provided by the Field-Feynman mechanism [19]. Here the string fragmentation takes place independently from both ends of the string. This scenario is employed in QGSM. Both models utilize available experimental information, such as hadron cross sections, resonance widths and decay modes. For the description of hadron-nucleus and nucleus-nucleus collisions a hadronic cascade is used. Particle propagation between the collisions is governed by Hamilton equations of motion. To obey the uncertainty principle, newly produced particles can interact only after a certain formation time. The Pauli principle is implemented by blocking the final state if the outgoing phase space is already occupied.
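Before turning to the results, a minimal sketch of how the flow coefficients in Equations (1) and (2) can be extracted from generated particles may be helpful. The particle list and the event-plane angle below are toy placeholders standing in for the output of a transport-model event; they are not part of UrQMD or QGSM.

```python
import math
import random

def flow_coefficient(phis, psi_n, n):
    """v_n = <cos[n(phi - Psi_n)]> averaged over the particles of an event
    (Equation (2)); in a real analysis the average also runs over events."""
    return sum(math.cos(n * (phi - psi_n)) for phi in phis) / len(phis)

# Toy event: particle azimuths drawn from a distribution with built-in
# directed and elliptic flow, dN/dphi ~ 1 + 2 v1 cos(phi) + 2 v2 cos(2 phi),
# sampled by accept-reject. The event-plane angle is set to zero here.
random.seed(1)
v1_true, v2_true, psi = 0.08, 0.05, 0.0
phis = []
while len(phis) < 20000:
    phi = random.uniform(-math.pi, math.pi)
    weight = 1.0 + 2.0 * v1_true * math.cos(phi) + 2.0 * v2_true * math.cos(2.0 * phi)
    if random.uniform(0.0, 1.0 + 2.0 * (v1_true + v2_true)) < weight:
        phis.append(phi)

print("v1 =", round(flow_coefficient(phis, psi, 1), 3))
print("v2 =", round(flow_coefficient(phis, psi, 2), 3))
```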
Main Results To investigate the basic features of the directed flow of pions and protons in microscopic models we opted for three bombarding energies, √ s = 4 GeV, 7.7 GeV and 19 GeV, three systems of colliding nuclei, Au+Au, Cu+Cu and S+S, and three centrality intervals, σ/σ geo = 0-10%, 10-40% and 40-80%.We consider directed flow of protons, v p 1 (y), first.These distributions are displayed in Figure 1.According to definition [20,21], if the product of particle momentum along x-axis and rapidity is positive, p x • y > 0, the flow is considered as "normal".In the opposite case, i.e., if p x • y < 0, we call it "antiflow".Note that at ultrarelativistic energies, the directed flow of both protons and pions at midrapidity is practically zero.Let us see how the directed flow of protons in Au+Au collisions is changing with decreasing collision energy from 19 GeV to 4 GeV.Firstly, calculations are done with the QGSM.For peripheral collisions v p 1 (y) has a characteristic wiggle structure, see bottom plots in Figure 1.It demonstrates weak antiflow at midrapidity.For semicentral Cu+Cu and S+S collisions with centrality 0-10% one sees antiflow at √ s = 19.6GeV, however, at √ s = 7.7 GeV the directed flow of protons is already normally elongated.This transformation is happening because the flying away baryon-rich remnants of colliding nuclei are closer to midrapidity zone, and the directed flow of charged hadrons in the remnants is developed in normal direction.Moreover, transition from antiflow to normal flow occurs earlier (in energy scale) in heavy-ion collisions compared to the light-ion ones.The explanation of this effect is as follows [22][23][24][25].Hadrons are produced more copiously in heavy-ion collisions than in light-ion collisions at the same centrality range.Hadrons emitted in the direction of nuclear remnants will interact further thus acquiring an extra momentum.These hadrons will be pushed from the midrapidity area to higher rapidity regions.For a light-ion system with lower multiplicity of secondary hadrons the loss of a few hadrons emitted in normal flow direction would be more noticeable compared to the heavy-ion system.It is worth noting that this effect is opposite to the reduction (or softening) of directed flow caused by the quark-gluon plasma (QGP) formation.The QGP is expected to be produced in (semi)central heavy-ion collisions rather than in light-ion ones.Therefore, the softening of proton directed flow at y = 0 should set in earlier in Au+Au collisions compared to Cu+Cu or S+S collisions at the same energy. For pions the picture is more permanent, as shown in Figure 2.Here the directed flow of pions at midrapidity demonstrates a distinct antiflow behavior for both heavy-and light-ion colliding systems, at all three bombarding energies, and for all three centrality bins. Similar behavior is also observed in UrQMD calculations.To see the change of the proton flow direction clearly we present in Figure 3 v p 1 (y) for S+S collisions at √ s = 3.5 GeV, 7.7 GeV and 11.6 GeV, respectively.At lower collision energies the peaks, associated with the proton flow in the nuclei remnants areas, become closer to the midrapidity zone.Thus, protons with normal flow also start to determine the directed flow of protons at midrapidity.Directed flow of pions in these reactions is shown in Figure 4.It demonstrates a clear antiflow, which is slightly increasing with decreasing bombarding energy. 
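The quantities used in this discussion are straightforward to extract from a transport-model event record. The following minimal sketch is our own illustration (not code from UrQMD or QGSM); it assumes the reaction plane lies along the x-axis, so Ψ₁ = 0, and collects three estimators: the flow coefficient v₁ = ⟨cos(φ − Ψ₁)⟩ = ⟨p_x/p_T⟩, the normal-flow/antiflow classification through the sign of p_x·y, and the midrapidity slope dv₁/dy obtained from a linear fit in a narrow rapidity window (|y| ≤ 0.5 is used for Figures 7 and 8 below).

```python
import numpy as np

def v1(px, pT):
    """Directed flow v1 = <cos(phi - Psi_1)> = <px/pT>, assuming Psi_1 = 0."""
    return np.mean(np.asarray(px) / np.asarray(pT))

def flow_orientation(px, y):
    """Count particles with normal flow (px*y > 0) and antiflow (px*y < 0),
    following the sign convention of Refs. [20,21]."""
    s = np.sign(np.asarray(px) * np.asarray(y))
    return {"normal": int(np.count_nonzero(s > 0)),
            "antiflow": int(np.count_nonzero(s < 0))}

def v1_slope(y, px, pT, y_cut=0.5, nbins=10):
    """dv1/dy at y = 0 from a linear fit to the binned v1(y) within |y| <= y_cut."""
    y, px, pT = map(np.asarray, (y, px, pT))
    edges = np.linspace(-y_cut, y_cut, nbins + 1)
    centers, values = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (y >= lo) & (y < hi)
        if m.any():
            centers.append(0.5 * (lo + hi))
            values.append(np.mean(px[m] / pT[m]))
    slope, _intercept = np.polyfit(centers, values, 1)
    return slope

# toy usage: particles with a small rapidity-odd px component (normal flow)
rng = np.random.default_rng(1)
y  = rng.uniform(-1.0, 1.0, 50_000)
py = rng.normal(0.0, 0.3, 50_000)
px = rng.normal(0.0, 0.3, 50_000) + 0.03 * y   # imprint dv1/dy > 0
pT = np.hypot(px, py)
print(flow_orientation(px, y), round(v1_slope(y, px, pT), 3))
```

A wiggle structure such as the one seen in the bottom panels of Figure 1 then translates into a small, possibly sign-changing, value of the fitted midrapidity slope.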
These features of the hadronic flow definitely need further investigation.Figure 5 (upper row) presents the snapshot of particle directed flows and densities for protons, antiprotons and charged pions in Au+Au collisions with the impact parameter b = 6 fm at √ s = 7.7 GeV after 10 fm/c of beginning of the collision.The whole space was subdivided into cells with volume V = 3 fm 3 .The arrows indicate the collective velocities calculated for each cell.Density contours show that spatial distributions of p, p and π ± are quite different.Protons are strongly influenced by the spectators, whereas the distributions of pions and antiprotons are more symmetric w.r.t.z = 0.The arising directed flow is a result of superposition of the partial flows in the cells, each having either positive (normal flow) or negative (antiflow) sign.Evolution of the directed flow of these three particle species at midrapidity is displayed in Figure 5 (bottom row).Although both normal flow and antiflow of all species are quickly developed within the first 1.5 fm/c, their resulting flow is weak compared to both flow and antiflow, components.It looks like the development of directed flow takes time longer than t = 14 fm/c even at midrapidity.Our next investigation concerns the evolution of partial differential flows of protons, charged pions and kaons, and Lambdas at y = 0. Results of the calculations for Au+Au collisions with b = 6 fm at √ s = 11.6 GeV are shown in Figure 6.Each plot displays two distributions.The first distribution, labeled as "time slices", presents the partial v 1 of a certain hadron species at midrapidity taken from t = 2 fm/c with the time step 1 fm/c, i.e., at t = 2 fm/c, 3 fm/c, 4 fm/c and so forth.However, not all of these hadrons contribute to final midrapidity flow after the freeze-out of particles.The second distribution, therefore, represents the v 1 | y=0 of hadrons from the final spectrum frozen out at t = 2.5 ± 0.5 fm/c, 3.5 ± 0.5 fm/c, etc.For all particle species the two curves converge to each other at 12 ≤ t ≤ 15 fm/c.One can see from this comparison that baryons, which are decoupled from the system at times 4 ≤ t ≤ 8 fm/c, carry quite strong directed flow at midrapidity.However, if we would stop all interactions between the hadrons at this moment, the flow developed by the baryons will be weak.It is also weaker than the final directed flow of baryons after the particle freeze-out.For mesons the picture is similar to that with baryons but in terms of the antiflow.Both the pions and kaons emitted earlier carry stronger antiflow at midrapidity, whereas the v 1 | y=0 of these particles is slowly increasing up to quite late times.The next problem is the investigation of the influence of mean fields on the strength of directed flow of identified hadrons.Recall that the mean-field potential in UrQMD consists of three parts [10], namely, Yukawa potential, Coulomb potential and Skyrme potential: where Parameters of the hard and the soft potentials used in the calculations are listed in Table 1.Note that direct comparison of results of model calculations with the experimental data is a very non-trivial task.One has to know not only the (pseudo)rapidity and transverse momentum cuts, but also binning of v 1 (y) distributions and rapidity interval chosen for the fit, determine the centrality according the multiplicity distribution, etc.Thus, the main goal of our present paper is to study the general trends and present a model description of hadron directed flow rather than fine tuning the free 
parameters of the models. To study the influence of hard and soft mean-field potentials on the directed flow of identified hadrons and the energy dependence of the flow, we generated one million minimum-bias Au+Au collisions at center-of-mass energies √s = 4, 7.7, 11.5, 19.6, 32, 62.4 and 200 GeV, respectively. For each energy below √s = 19.6 GeV one run was performed with the hard potential, another with the soft potential, and a third one without the mean fields. Multiplicity-based centrality separation was performed and semicentral (10–40%) events were chosen for further analysis. The midrapidity slope of the directed flow was determined within the interval |y| ≤ 0.5 (cf. the estimator sketch above). Because the position of the event plane (EP) in the experiment is unknown, we also employed the experimental procedure [26] of event-plane reconstruction to estimate the possible systematic errors. Results obtained for baryons, namely protons, antiprotons, Lambdas and anti-Lambdas, are shown in Figure 7. For comparison, experimental data of the STAR collaboration [26,27] are also plotted on top of the UrQMD calculations. Calculated values of dv_1/dy|_{y=0} as a function of √s for charged pions and charged kaons are displayed in Figure 8. One can see in Figure 7 that for protons and Lambdas the version without the mean fields provides a fair quantitative description of the data. Calculations with the hard potential (stiff equation of state, EOS1) produce too strong flow and antiflow of these particles, whereas the soft potential (EOS2) makes the flow weaker. For antibaryons, the soft potential provides better quantitative agreement with the experiment at √s = 11.5 GeV but cannot match the data at √s = 7.7 GeV. Nevertheless, the change of sign from normal flow to antiflow for baryons within the interval 7.7 GeV ≤ √s ≤ 11.5 GeV and the decrease of the antibaryon antiflow are reproduced. For charged mesons, neither the hard nor the soft potential plays a decisive role at √s ≥ 11.5 GeV. At lower energies all calculations predict stronger antiflow for all mesons except positive kaons. This interesting problem needs further investigation. The time evolution of the partial directed flows of identified hadrons at midrapidity takes about 10–15 fm/c at collision energies around 11.6 GeV. These distributions can be decomposed into two parts representing normal flow and antiflow. Both parts have quite substantial magnitudes, whereas the resulting flow is relatively weak. Calculations with hard and soft mean-field potentials for Au+Au collisions at √s ≤ 19.6 GeV show that these potentials influence mainly the directed flow of baryons rather than that of mesons. The model calculations qualitatively reproduce the experimentally observed trends. A quantitative description of the data in the intermediate energy range, however, is a complex problem demanding further investigation.

Figure 2. The same as Figure 1 but for the directed flow of π±.

Figure 4. The same as Figure 3 but for the directed flow of π±.

Figure 5. Upper row: snapshot at time t = 10 fm/c of hadron densities (contour plots) and collective velocities (arrows) of the cells, each with volume V = 3 fm³, for protons (left), antiprotons (middle), and charged pions (right) in UrQMD calculations of Au+Au collisions at √s = 7.7 GeV with b = 6 fm. Bottom row: time development of the total directed flow (solid line) and of the partial flows in the normal-flow (red dashed line) and antiflow (blue dash-dotted line) directions for protons (left), antiprotons (middle), and charged pions (right) in these reactions.
Figure 6. Time evolution of the directed flow (green dashed curves) of p, π±, K±, and Λ at midrapidity in UrQMD calculations of Au+Au collisions with b = 6 fm at √s = 11.6 GeV. The directed flow carried by hadrons already frozen out at a given time is shown by red solid curves. Dots represent the final slopes of the hadron directed flows at midrapidity after the end of the A+A collisions.

Figure 7. The slope of the directed flow of p, p̄, Λ, and Λ̄ at y = 0 as a function of √s in UrQMD calculations with hard (crosses) and soft (diamonds) mean-field potentials, and without the mean fields (circles), for Au+Au collisions with centrality 10–40%. Squares denote the experimental data from [26,27].

Figure 8. The same as Figure 7 but for the directed flow of π+, π−, K+, and K−, respectively.

Two microscopic transport models, UrQMD and QGSM, were used to study the general features of the directed flow of identified hadrons in heavy-ion collisions in the energy range 4 GeV ≤ √s ≤ 19.6 GeV. Although the mechanisms of string excitation and string fragmentation in the two models are different, both UrQMD and QGSM indicate that the directed flow of protons and Lambdas is positive (normal flow) in central and semicentral collisions at √s ≤ 7.7 GeV. It is reduced and develops into antiflow with increasing bombarding energy and as the reaction becomes more peripheral. Moreover, this effect is stronger in light-ion collisions at the same centrality. This feature can be explained by the influence of the dense baryon-rich remnants of the colliding nuclei. The directed flows of antibaryons and mesons demonstrate a stable, weak antiflow behavior at all energies and all centralities.
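As a compact illustration of the procedure behind Figure 6 discussed above, the two families of curves can be obtained from an event record by the following bookkeeping. This is a sketch of our own, assuming each hadron carries its freeze-out time t_fo together with its midrapidity kinematics; it is not an excerpt from UrQMD or QGSM.

```python
import numpy as np

def v1_time_slices(snapshots, ymax=0.1):
    """Partial v1 at midrapidity from successive snapshots of *all* hadrons,
    one entry per time step (the 'time slices' curves of Figure 6).
    Each snapshot is a dict with arrays 'px', 'pT', 'y'."""
    out = []
    for s in snapshots:
        m = np.abs(s["y"]) < ymax
        out.append(np.mean(s["px"][m] / s["pT"][m]) if m.any() else np.nan)
    return out

def v1_frozen_out(px, pT, y, t_fo, t_edges, ymax=0.1):
    """v1 at midrapidity of hadrons from the *final* spectrum, binned by their
    freeze-out time (the 'frozen-out' curves, bins 2.5 +- 0.5 fm/c, ...)."""
    out = []
    for lo, hi in zip(t_edges[:-1], t_edges[1:]):
        m = (t_fo >= lo) & (t_fo < hi) & (np.abs(y) < ymax)
        out.append(np.mean(px[m] / pT[m]) if m.any() else np.nan)
    return out
```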
4,142.6
2019-03-05T00:00:00.000
[ "Physics" ]
Resolution a la Kronheimer of $\mathbb{C}^3/\Gamma$ singularities and the Monge-Ampere equation for Ricci-flat Kaehler metrics in view of D3-brane solutions of supergravity We analyze the relevance of the generalized Kronheimer construction for the gauge-gravity correspondence. We study the general structure of IIB supergravity D3-brane solutions on crepant resolutions $Y$ of singularities $\mathbb{C}^3/\Gamma$ with $\Gamma$ a finite subgroup of $SU(3)$. Next we concentrate on another essential item for the D3-brane construction, i.e., the existence of a Ricci-flat metric on $Y$, with particular attention to the case $\Gamma=\mathbb{Z}_4$. We conjecture that on the exceptional divisor the Kronheimer K\"ahler metric and the Ricci-flat one, that is locally flat at infinity, coincide. The conjecture is shown to be true in the case of the Ricci-flat metric on ${\rm tot} K_{{\mathbb WP}[112]}$ that we construct, which is a partial resolution of $\mathbb{C}^3/\mathbb{Z}_4$. For the full resolution we have $Y=\operatorname{tot} K_{\mathbb{F}_{2}}$, where $\mathbb{F}_2$ is the second Hizebruch surface. We try to extend the proof of the conjecture to this case using the one-parameter K\"ahler metric on $\mathbb{F}_2$ produced by the Kronheimer construction as initial datum in a Monge-Amp\`{e}re (MA) equation. We exhibit three formulations of this MA equation, one in terms of the K\"ahler potential, the other two in terms of the symplectic potential; in all cases one can establish a series solution in powers of the fiber variable of the canonical bundle. The main property of the MA equation is that it does not impose any condition on the initial geometry of the exceptional divisor, but uniquely determines all the subsequent terms as local functionals of the initial datum. While a formal proof is still missing, numerical and analytical results support the conjecture. As a by-product of our investigation we have identified some new properties of this type of MA equations that we believe to be so far unknown. Introduction We report on the advances obtained on the following special aspect of the gauge/gravity correspondence, within the context of quiver gauge-theories [1,2,3,4,5]: the relevance of the generalized Kronheimer construction [6,7] for the resolution of C 3 /Γ singularities. In particular, after an introduction about D3-brane supergravity solutions, we consider, within this framework, the issues of the construction of a Ricci-flat metric on the smooth resolution Y Γ of C 3 /Γ. We begin with the general problem of establishing holographic dual pairs whose members are A) a gauge theory living on a D3-brane world volume, B) a classical D3-brane solution of type IIB supergravity in D=10 supergravity. Gauge theories based on quiver diagrams have been extensively studied in the literature [1,2,3,4,5] in connection with the problem of establishing holographic dual pairs as described above. Indeed the quiver diagram is a powerful tool which encodes the data of a Kähler quotient describing the geometry of the six directions transverse to the brane. The linear data of such a Kähler (or HyperKähler) quotient are the menu of the dual supersymmetric gauge theory, as they specify: 1. the gauge group factors, 2. the matter multiplets, 3. the representation assignments of the latter with respect to the gauge group factors. 
The possibility of testing the holographic principle [8,9,10,11,12] and resorting to the supergravity side of the correspondence in order to perform, classically and in the bulk, quantum calculations that pertain to the boundary gauge theory is tightly connected with the quiver approach whenever the classical brane solution has a conformal point corresponding to a limiting geometry of the following type: In the above equation by AdS p+2 we have denoted anti de Sitter space in p + 2-dimensions while SE D−p−2 stands for a Sasaki-Einstein manifold in D − p − 2-dimensions [13]. Within the general scope of quivers a special subclass is that of McKay quivers that are group theoretically defined by the embedding of a finite discrete group Γ in an n-dimensional complex unitary group Γ → SU(n) (1.2) and are associated with the resolution of C n /Γ quotient singularities by means of a Kronheimer-like construction [15,16,17]. The case n = 2 corresponds to the HyperKähler quotient construction of ALE-manifolds as the resolution of the C 2 /Γ singularities, the discrete group Γ being a finite Kleinian subgroup of SU (2), as given by the ADE classification 1 . The case n = 3 was the target of many interesting and robust mathematical developments starting from the middle of the nineties up to the present day [19,20,21,22,23,24,25,26,27]. The main and most intriguing result in this context, which corresponds to a generalization of the Kronheimer construction and of the McKay correspondence, is the group theoretical prediction of the cohomology groups H (p,q) Y Γ [3] of the crepant smooth resolution Y Γ [3] of the quotient singularity C 3 /Γ. Specifically, the main output of the generalized Kronheimer construction for the crepant resolution of a singularity C 3 /Γ is a blowdown morphism: where Y Γ [3] is a noncompact smooth three-fold with trivial canonical bundle. On such a complex three-fold a Ricci-flat Kähler metric ds 2 RFK (Y Γ [3] ) = g RFK αβ dy α ⊗ dy β (1.4) with asymptotically conical boundary conditions (Quasi-ALE) is guaranteed to exist (see e.g. [28], Thm. 3.3), although it is not necessarily the one obtained by means of the Kähler quotient. According to the theorem proved by Ito-Reid [19,22,23] and based on the concept of age grading 2 , the homology cycles of Y Γ [3] are all algebraic and its non vanishing cohomology groups are all even and of type H (q,q) . We actually have a correspondence between the cohomology classes of type (q, q) and the discrete group conjugacy classes with age-grading q, encoded in the statement: dim H 1,1 Y Γ [3] = # of junior conjugacy classes in Γ; dim H 2,2 Y Γ [3] = # of senior conjugacy classes in Γ; all other cohomology groups are trivial (1.5) The absence of harmonic forms of type (2,1) implies that the three-folds Y Γ [3] admit no infinitesimal deformations of their complex structure and is also a serious obstacle, as we discuss in section 2 to the construction of supergravity D3-brane solutions based on Y Γ [3] that have transverse three-form fluxes. There is however a possible way out that is provided by the existence of mass-deformations. This is the main point of another line of investigation which we hope to report on soon. 
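The Ito–Reid counting quoted above is easy to make concrete for cyclic groups. For Γ = Z_r acting diagonally on C³ with weights (a₁, a₂, a₃) (the a_i summing to 0 mod r, so that Γ ⊂ SU(3)), the age of the group element labelled by k is Σ_i ((a_i k) mod r)/r; junior (age-1) classes count dim H^{1,1} and senior (age-2) classes count dim H^{2,2}. The short sketch below only illustrates this bookkeeping; the diagonal weights used for Z₄ are an assumption about the type of action studied in [7], not data taken from that reference.

```python
def age_spectrum(r, weights):
    """Ages of the non-trivial elements of Z_r acting on C^3 as
    z_i -> exp(2*pi*1j*a_i*k/r) * z_i.  age(k) = sum_i ((a_i*k) mod r) / r.
    Junior classes (age 1) count dim H^{1,1}, senior classes (age 2)
    count dim H^{2,2} of the crepant resolution (Ito-Reid)."""
    return {k: sum((a * k) % r for a in weights) / r for k in range(1, r)}

# Z_3 acting diagonally with weights (1,1,1): one junior and one senior class,
# matching h^{1,1} = h^{2,2} = 1 for the resolution O_{P^2}(-3) discussed later.
print(age_spectrum(3, (1, 1, 1)))   # {1: 1.0, 2: 2.0}

# Z_4 with weights (1,1,2) (assumed form of an action resolved by tot K_{F_2}):
print(age_spectrum(4, (1, 1, 2)))   # {1: 1.0, 2: 1.0, 3: 2.0} -> two junior, one senior
```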
If the McKay quiver diagram has certain properties, the superpotential W(Φ) on the gauge-theory side of the correspondence can be deformed by well defined mass-terms and, after (gaussian) integration of the massive fields, the McKay quiver is remodeled into a new non-McKay quiver associated with the Kähler or HyperKähler quotient description of smooth Kähler manifolds, like the resolved conifold, that admit harmonic (2, 1)-forms and sustain adequate D3-brane solutions. On the basis of the above remarks we can spell-out the scope of the present paper in the following way. The embedding (1.2) determines in a unique way a McKay quiver diagram which determines: 1. the gauge group F Γ , 2. the matter field content Φ I of the gauge theory, 3. the representation assignments of all the matter fields Φ I , 4. the possible (mass)-deformations of the superpotential W(φ), 5. the Ricci-flat metric on Y can be inferred, by means of the Monge-Ampère equation, from the Kähler metric on the exceptional compact divisor (in those cases where it exists) in the resolution of C 3 /Γ, which, on its turn, is determined by the McKay quiver through the Kronheimer construction. In relation with point 4) of the above list, to be discussed in a future paper, for the case C 3 /Z 4 we anticipate the following. By means of gaussian integration we get a new quiver diagram that is not directly associated with a discrete group, yet it follows from the McKay quiver of Γ in a unique way. The group theoretical approach allows us to identify deformations of the superpotential and introduce new directions in the moduli space of the crepant resolution. In this sense, we go beyond the Ito-Reid theorem. Both physically and mathematically this is quite interesting and provides a new viewpoint on several results, some of them well known in the literature. Most of the latter are based on cyclic groups Γ and rely on the powerful weapons of toric geometry. Yet the generalized Kronheimer construction applies also to non abelian groups Γ ⊂ SU(3) and so do the cohomological theorems proved by Ito-Reid, Ishii and Craw. Hence available mass-deformations are encoded also in the McKay quivers of non abelian groups Γ and one might explore the geometry of the transverse manifolds emerging in these cases. 2 For a recent review of these matters within a general framework of applications to brane gauge theories see [6,7]. In relation with point 5) of the above list, fully treated in this paper for the same case C 3 /Z 4 , we stress that, although the Kronheimer metric on Y Γ [3] is not Ricci-flat, yet its restriction to the exceptional divisor provides the appropriate starting point for an iterative solution of the Monge Ampère equation which determines the Ricci-flat metric. In view of the above considerations we can conclude that the McKay quiver diagram does indeed provide a determination of both sides of a D3-brane dual pair, the gauge theory side and the supergravity side. In this paper we focus on two paradigmatic examples, namely C 3 /Z 3 (with Z 3 diagonally embedded in SU(3)) and C 3 /Z 4 . The latter case was studied in depth in a recent publication [7]. Relying on those results here we concentrate on the issue of the Ricci-flat Kähler metric. 
While in the case of HyperKähler quotients (yielding N = 2 gauge theories and corresponding to the original Kronheimer construction of C 2 /Γ resolutions) the Kronheimer metric is automatically Ricci-flat, in the case of Kähler quotients and of the generalized Kronheimer construction of C 3 /Γ resolutions, the Kronheimer metric is not Ricci-flat and one needs to resort to different techniques in order to find a Ricci-flat metric on the same three-fold Y Γ [3] that is algebraically determined by the construction. The fascinating scenario that emerges from our combined analytical and numerical results is summarized in the following discussion. From the point of view of complex algebraic geometry the resolved variety Y Γ [3] is in many cases and, in particular in those here analyzed, the total space of a line-bundle over a compact complex two-fold, which coincides with the exceptional divisor ED of the resolution of singularities: π −→ ED [2] ; ∀ p ∈ ED [2] : π −1 (p) ∼ C (1. 6) In the paradigmatic example, recently studied in [7], of the resolutionà la Kronheimer of the C 3 /Z 4 singularity, ED is indeed the compact component of the exceptional divisor emerging from the blow-up of the singular point in the origin of C 3 and it happens to be the second Hirzebruch surface F 2 . Other cases are possible. Hirzebruch surfaces are P 1 bundles over P 1 , so that ED [2] π −→ P 1 ; ∀ p ∈ P 1 : π −1 (p) ∼ P 1 (1.7) This double fibration is illustrated in a pictorial fashion in fig.1. Given this hierarchical structure, the sought for Ricci-flat metric is constrained to possess the following continuous isometry group: G iso = SU(2) × U(1) v × U(1) w (1.8) whose holomorphic algebraic action on the three coordinates u, v, w is described later in eq. (7.1). The chosen isometry group implies that the sought for Ricci-flat metric is toric, as each of the three complex coordinates is acted on by an independent U(1)-isometry. Furthermore the enhancement of one of the U(1)'s to SU (2) guarantees that either the Kähler potential K in the standard complex formulation of Kähler geometry, or the symplectic potential G, the Legendre transform of the former appearing in the available symplectic formalism [60], are functions only of two invariant real variables (see section 6 and 9.1). Assuming that we possess either one of these two real functions for the Ricci-flat metric 3 : we can reduce the corresponding geometry to that of the exceptional divisor by setting a section of the Y Γ [3] bundle to zero as: (1.10) The fascinating scenario we have alluded to some lines above is encoded in the following: 3 For conventions see once again sections 9.1 and 6. Figure 1: A conceptual picture of the resolved three-fold Y Γ [3] displaying its double fribration structure. The orange sphere in the middle symbolizes the base manifold of the bundle ED [2] . A dense complex coordinate patch for this P 1 is named u in the main body of the article. The blueish spheres around the orange one symbolize the P 1 fibers of ED [2] . A dense complex coordinate patch for these fibers is named v in the main body of the article. Finally the greenish rays enveloping the divisor ED [2] symbolize the noncompact fibers of the bundle Y Γ [3] . A dense coordinate patch for these fibers is named w in the main body of the article. 
Kro [Y Γ [3] ] on the line bundle (1.6) and the Ricci-flat one ds 2 Ricf lat [Y Γ [3] ] on the same manifold, that has the same isometries and is asymptotically locally flat 4 are different, yet they coincide on the exceptional divisor ED. The basic argument in favor of this conjecture is provided by an in depth analysis of a particular orthotoric metric that we construct in this paper and that is shown to describe the Ricci-flat metric on a degenerate limit of three-fold Y Γ [3] , as described in [7]. This is a partial resolution of the C 3 /Z 4 singularity and it occurs when the stability parameters (Fayet-Iliopolous parameters in the physics jargon) are restricted to be on the unique type III wall 5 appearing in the chamber structure associated with the generalized Kronheimer construction for this McKay quiver. From the algebraic geometry viewpoint, this variety Y [3] is the total space of the canonical bundle over the weighted projective space WP [112]: (1.11) and its exceptional divisor is WP [112]. We show that the Kähler metric induced on WP [112] by our new Ricci-flat orthotoric metric is precisely identical with that obtained from the Kronheimer construction once reduced to the divisor. The various inspections of this known case within the framework of different formalisms and using different coordinate patches provided us with the means to make conjecture 1.1 more robust. The main tool at our disposal is provided by the Monge-Ampère (MA) equation for Ricci-flatness of which we develop two versions, one in terms of the Kähler potential K( , f) (see section 9.1) and one in terms of the symplectic potential 6 G(v, w)(see section 9.2). In both cases we showed that the potential can be developed in power series of the invariant associated with the non compact fibers (either f or w − 3 2 ) and that the MA equation imposes no restriction on the 0-th order potentials K 0 ( ) or G 0 (v), namely on the geometry chosen for the exceptional divisor. Rather, dealing carefully with the boundary conditions, we discovered that in both cases the MA equation completely determines all the other terms once K 0 ( ) or G 0 (v) are given. Hence we can start with K Kro 0 ( ) or G Kro 0 (v) as they are determined by the Kronheimer construction and going through the power series treatment of the MA equation we can construct a corresponding Ricci-flat metric. The only question which remains open is whether this Ricci-flat metric is asymptotically locally flat. In the case of totK WP [112] it is. This supports the conjecture. In order to transform the conjecture into a theorem one should first resum the series and study the metric at large distances. In this respect our study of the symplectic potential produced encouraging results. First of all we were able to construct an explicit form G WP [112] (v, w) of such potential for the orthotoric case. The function G WP [112] (v, w), which is relatively simply written in terms of elementary transcendental functions, satisfies the MA equation and can be expanded in series of (w − 3 2 ). The remarkably similar behavior of the series truncations of the exact solution corresponding to totK WP [112] with the same truncations of the series determined by the MA equation for the smooth case totK F 2 suggests that also in the latter case there exists a summation of the series to some simple deformation of the function G WP [112] (v, w). We postpone to future publications further attempts to sum the series solution and prove, if possible, our conjecture. 
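For orientation, the Ricci-flatness condition that underlies all the Monge–Ampère equations referred to above can be written, in any of the complex coordinate systems used in this paper, in the standard Kähler-geometry form below; the reductions to two real invariants and the boundary conditions are the paper-specific ingredients spelled out in sections 6 and 9.

```latex
% Standard complex Monge--Ampere form of Ricci-flatness for a Kaehler metric
% g_{\alpha\bar\beta} = \partial_\alpha\partial_{\bar\beta}\,\mathcal{K};
% the paper-specific reductions live in sections 6 and 9.
\begin{equation*}
  \mathrm{Ric}_{\alpha\bar\beta}
    \;=\; -\,\partial_\alpha\partial_{\bar\beta}\,
          \log\det\!\big(g_{\gamma\bar\delta}\big)\;=\;0
  \qquad\Longleftrightarrow\qquad
  \det\!\Big(\partial_\alpha\partial_{\bar\beta}\,\mathcal{K}\Big)
    \;=\;\big|f(y)\big|^{2},
  \quad f \ \text{holomorphic}.
\end{equation*}
```

When the isometries force the Kähler potential to depend only on real invariants, as for all the metrics considered here, the holomorphic function f can be taken constant, and the condition above becomes the nonlinear Monge–Ampère equation whose power-series solutions in the fiber variable are discussed in the text.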
2 D3-brane supergravity solutions on resolved C 3 /Γ singularities An apparently general property of the Y Γ [3] manifolds that emerge from the crepant resolution construction, at least when Γ is abelian and cyclic is the following. The non-compact Y Γ [3] corresponds to the total space of some line-bundle over a complex two-dimensional compact base manifold M 2 : According with this structure we name u, v, w the three complex coordinates of Y Γ [3] , u, v being the coordinates of the base manifold M 2 and w being the coordinate spanning the fibers. We will use the same names also in more general cases even if the interpretation of w as fiber coordinate will be lost. Hence we have: An important observation which ought to be done right at the beginning is that other Kähler metrics g αβ do exist on the three-fold Y [3] that are not Ricci-flat, although the cohomology class of the associated Kähler form K can be the same as the cohomology class of K RFK . Within the framework of the generalized Kronheimer construction, among such Kähler (non-Ricci flat) metrics we have the one determined by the Kähler quotient according to the formula of Hithchin, Karlhede, Lindström and Roček [30]. Indeed, as we show later in explicit examples, the Kähler metric: which emerges from the mathematical Kähler quotient construction and which is naturally associated with Y [3] when this latter is interpreted as the space of classical vacua of the D3-brane gauge theory (set of extrema of the scalar potential), is generically non Ricci-flat. On the other hand on the supergravity side of the dual D3-brane pair we need the Ricci-flat metric in order to construct a bona-fide D3-brane solution of type IIB supergravity. In particular, calling Y Γ [3] the crepant resolution of the C 3 /Γ singularity, admitting a Ricci-flat metric, we can construct a bona-fide D3 brane solution which is solely defined by a single real function H on Y Γ [3] , that should be harmonic with respect to the Ricci-flat metric, namely: Indeed the function H(y) is necessary and sufficient to introduce a flux of the Ramond 5-form so as to produce the splitting of the 10-dimensional space into a 4-dimensional world volume plus a transverse 6-dimensional space that is identified with the three-fold Y Γ [3] . This is the very essence of the D3-picture. Yet there is another essential item that was pioneered in [31,32,33] namely the consistent addition of fluxes for the complex 3-forms H ± that appear in the field content of type IIB supergravity. These provide relevant new charges on both sides of the gauge/gravity correspondence. In [34,35] such fluxes were constructed explicitely relying on a special kind of three-fold: where ALE Γ denotes one of the ALE-manifolds constructed by Kronheimer [15,16] as HyperKähler quotients resolving the singularity C 2 /Γ with Γ ⊂ SU(2) a finite Kleinian subgroup. As we explain in detail below, the essential geometrical feature of Y [3] , required to construct consistent fluxes of the complex 3-forms H ± , is that Y [3] should admit imaginary (anti)-self-dual, harmonic 3-forms Ω (2,1) , which means: and simultaneously: Since the Hodge-duality operator involves the use of a metric, we have been careful in specifying that (anti)-self-duality should occur with respect to the Ricci-flat metric that is the one used in the rest of the supergravity solution construction. 
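The two conditions left implicit above (eqs. (2.6)–(2.7) in the original numbering) are, schematically, the closure of the three-form and its imaginary (anti-)self-duality with respect to the Ricci-flat metric. Up to the sign and normalization conventions fixed in the text, they read:

```latex
% Schematic restatement of eqs. (2.6)-(2.7): closure and imaginary (anti-)self-duality.
% The sign of the eigenvalue and all normalizations are those fixed in the text;
% the Hodge star is taken with the Ricci-flat metric on Y^{[3]}.
\begin{equation*}
  d\,\Omega^{(2,1)} \;=\; 0 ,
  \qquad
  \star_{g^{\mathrm{RFK}}}\,\Omega^{(2,1)} \;=\; \pm\, i\,\Omega^{(2,1)} .
\end{equation*}
```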
The reason why the choice (2.5) of the three-fold allows the existence of harmonic anti-self dual 3forms is easily understood recalling that the ALE Γ -manifold obtained from the resolution of C 2 /Γ has a compact support cohomology group of type (1, 1) of the following dimension: Naming z ∈ C the coordinate on the factor C of the product (2.5) and ω (1,1) I a basis of harmonic anti-self dual one-forms on ALE Γ , the ansatz utilized in [34,35] to construct the required Ω (2,1) was the following: where f I (z) is a set of holomorphic functions of that variable. As it is well known r is also the rank of the corresponding Lie Algebra in the ADE-classification of the corresponding Kleinian groups and the 2-forms ω (1,1) I can be chosen dual to a basis homology cycles C I spanning H 2 (ALE Γ ), namely we can set: The form Ω (2,1) is closed by construction: dΩ (2,1) = 0 (2.11) and it is also anti-selfdual with respect to the Ricci-flat metric: Hence the question whether we can construct sufficiently flexible D3-solutions of supergravity with both 5-form and 3-form fluxes depends on the nontriviality of the relevant cohomology group: and on our ability to find harmonic (anti)-self dual representatives of its classes (typically not with compact support and hence non normalizable). At this level we find a serious difficulty. It seems therefore that we are not able to find the required Ω (2,1) forms on Y Γ [3] and that no D3-brane supergravity solution with 3-form fluxes can be constructed dual to the gauge theory obtained from the Kronheimer construction dictated by Γ ⊂ SU(3). Fortunately, the sharp conclusion encoded in eq. (1.5) follows from a hidden mathematical assumption that, in physical jargon, amounts to a rigid universal choice of the holomorphic superpotential W(Φ). Under appropriate conditions that we plan to explain and which are detectable at the level of the McKay quiver diagram, the superpotential can be deformed (mass deformation) yielding a family of three-folds Y Γ,µ [3] which flow, for limiting values of the parameter (µ → µ 0 ) to a three-fold Y Γ,µ 0 [3] admitting imaginary anti self-dual harmonic (2,1)-forms. Since the content and the interactions of the gauge theory are dictated by the McKay quiver of Γ and by its associated Kronheimer construction, we are entitled to see its mass deformed version and the exact D3-brane supergravity solution built on Y Γ,µ 0 [3] as dual to each other. This will be the object of a future work. Here we begin with an accurate mathematical summary of the construction of D3-brane solutions of type IIB supergravity using the geometric formulation of the latter within the rheonomy framework [36]. Geometric formulation of Type IIB supergravity In order to discuss conveniently the D3 brane solutions of type IIB that have as transverse space the crepant resolution of a C 3 /Γ singularity, we have to recall the geometric Free Differential Algebra formulation of the chiral ten dimensional theory fixing with care all our conventions, which is not only a matter of notations but also of principles and geometrical insight. Indeed the formulation of type IIB supergravity as it appears in string theory textbooks [37,38] is tailored for the comparison with superstring amplitudes and is quite appropriate to this goal. Yet, from the viewpoint of the general geometrical set up of supergravity theories this formulation is somewhat unwieldy. 
Specifically, it neither makes the SU(1,1)/U(1) coset structure of the theory manifest, nor does it relate the supersymmetry transformation rules to the underlying algebraic structure which, as in all other instances of supergravity, is a simple and well-defined Free Differential Algebra. The Free Differential Algebra of type IIB supergravity was singled out many years ago by Castellani in [39], and the geometric, manifestly SU(1,1)-covariant formulation of the theory was constructed by Castellani and Pesando in [40]. Their formulae and their transcription from a complex SU(1,1) basis to a real SL(2,R) basis were summarized and thoroughly explained in a dedicated chapter of a book authored by one of us [41], to which we refer the reader.

The D3-brane solution with a Y^[3] transverse manifold

In this section we discuss a D3-brane solution of type IIB supergravity in which, transverse to the brane world-manifold, we place a smooth non-compact three-fold Y^[3] endowed with a Ricci-flat Kähler metric. The ansatz for the D3-brane solution is characterized by two kinds of flux; in addition to the usual RR 5-form flux, there is a non-trivial flux of the supergravity complex 3-form field strengths H±.

The D3-brane ansatz

We make the following ansatz for the metric^7:

ds²_[10] = H(y,ȳ)^{-1/2} ( −η_{μν} dx^μ ⊗ dx^ν ) + H(y,ȳ)^{1/2} ds²_{Y^[3]} ,  ds²_{Y^[3]} = g^{RFK}_{αβ̄} dy^α ⊗ dȳ^β̄ ,  det(g_[10]) = H(y,ȳ) det(g^{RFK}) ,

where g^{RFK} is the Kähler metric of the Y^[3] manifold, the real function K^{RFK}(y,ȳ) being a suitable Kähler potential.

Elaboration of the ansatz

In terms of vielbein, the ansatz (2.2) corresponds to a warped frame built from the vielbein 1-forms e of the manifold Y^[3] (a schematic form of this frame is written out below). The structure equations of the latter are^8: The relevant property of the Y metric that we use in solving the Einstein equations is that it is Ricci-flat: What we need in order to derive our solution and discuss its supersymmetry properties is the explicit form of the spin connection for the full 10-dimensional metric (2.2) and the corresponding Ricci tensor. From the torsion equation one can uniquely determine the solution: 7 As explained in appendix A, the conventions for the gamma matrices and the spinors are set with a mostly minus metric dτ². In the discussion of the solution, however, we use ds² = −dτ² for convenience. We hope this does not cause any confusion. 8 The hats over the spin connection and the Riemann tensor denote quantities computed without the warp factor. Inserting this result into the definition of the curvature 2-form we obtain^9: where, for any function f(y,ȳ) with support on Y^[3], the corresponding symbol denotes the action on it of the Laplace-Beltrami operator with respect to the metric (2.3), which is the Ricci-flat one; we have omitted the superscript RFK just for simplicity. Indeed, on the supergravity side of the correspondence we use only the Ricci-flat metric and there is no ambiguity.

Analysis of the field equations in geometrical terms

The equations of motion for the scalar fields φ and C_[0] and for the 3-form field strengths F^{NS}_[3] and F^{RR}_[3] can be better analyzed using complex notation. Defining, as we did above, the complex combinations of eqs. (2.10)–(2.11), eqs. (2.12)–(2.13) can respectively be written as: while the equation for the 5-form becomes: Besides assuming the structure (2.2), we also assume that the two scalar fields, namely the dilaton φ and the Ramond-Ramond 0-form C_[0], are constant and vanishing: As we shall see, this assumption simplifies the equations of motion considerably, although these two scalar fields can easily be restored [33].
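Since the displayed vielbein formula did not survive extraction, we record here the standard warped-frame rewriting consistent with the metric ansatz (2.2) above; the index conventions are those recalled in footnote 9.

```latex
% Vielbein rewriting of the warped D3-brane ansatz (standard form consistent with (2.2)):
% squaring the frames reproduces the warp factors H^{-1/2} and H^{+1/2} of the metric
% and the determinant relation det(g_[10]) = H det(g^RFK).
\begin{equation*}
  V^{a} \;=\; H(y,\bar y)^{-\frac14}\, dx^{a}, \quad a=0,\dots ,3,
  \qquad
  V^{i} \;=\; H(y,\bar y)^{+\frac14}\, \mathbf{e}^{i}, \quad i=4,\dots ,9,
\end{equation*}
```

where e^i are the vielbein 1-forms of the Ricci-flat metric on Y^[3].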
9 The reader should be careful with the indices. Latin indices are always frame indices referring to the vielbein formalism. Furthermore we distinguish the 4 directions of the brane volume by using Latin letters from the beginning of the alphabet while the 6 transversal directions are denoted by Latin letters from the middle and the end of the alphabet. For the coordinate indices we utilize Greek letters and we do exactly the reverse. Early Greek letters α, β, γ, δ, . . . refer to the 6 transverse directions while Greek letters from the second half of the alphabet µ, ν, ρ, σ, . . . refer to the D3 brane world volume directions as it is customary in D = 4 field theories. The three-forms The basic ansatz characterizing the solution and providing its interpretation as a D3-brane with three-form fluxes is described below. The ansatz for the complex three-forms of type IIB supergravity is given below and is inspired by what was done in [35,34] in the case where Y [3] = C × ALE Γ : where Ω (2,1) is localized on Y [3] and satisfies eq.s (2.6-2.7) If we insert the ansätze (2.15,2.16) into the scalar field equation (2.12) we obtain: This equation is automatically satisfied by our ansatz for a very simple reason that we explain next. The form H + is by choice a three-form on Y [3] of type (2, 1). Let Θ [3] be any three-form that is localized on the transverse six-dimensional 10 manifold Y [3] : When we calculate the Hodge dual of Θ [3] with respect to the 10-dimensional metric (2.2) we obtain a 7-form with the following structure: where: is the volume-form of the flat D3-brane and Θ [3] ≡ g Θ [3] (2.21) is the dual of the three-form Θ [3] with respect to the metric g defined on Y [3] . Let us now specialize the three-form Θ [3] to be of type (2, 1): As shown in [31,32], preservation of supersymmetry requires the complex three-form H + to obey the condition 11 Hence: The self-dual 5-form Next we consider the self-dual 5-form F RR [5] which by definition must satisfy the following Bianchi identity: Our ansatz for F RR [5] is the following: F RR [5] = α (U + 10 U ) (2.26) where α is a constant to be determined later. By construction F RR [5] is self-dual and its equation of motion is trivially satisfied. What is not guaranteed is that also the Bianchi identity (2.25) is fulfilled. Imposing it, results into a differential equation for the function H (y y y, y y y). Let us see how this works. Starting from the ansatz (2.27) we obtain: Calculating the components of the dual form 10 U we find that they are non vanishing uniquely in the six transverse directions: The essential point in the above calculation is that all powers of the function H exactly cancel so that 10 U is linear in the H-derivatives 12 . Next using the same coordinate basis we obtain: = α 2 g H(y y y, y y y) × Vol Y [3] (2.31) where: is the volume form of the transverse six-dimensional space. Once derived with the use of real coordinates, the relation (2.31) can be transcribed in terms of complex coordinates and the Laplace-Beltrami operator 2 g can be written as in eq. (2.9). Let us now analyze the source terms provided by the three-forms. With our ansatz we obtain: J (y y y, y y y) = − This is the main differential equation to which the entire construction of the D3-brane solution can be reduced to. We are going to show that the parameter α is determined by Einstein's equations and fixed to α = 1. The equations for the three-forms Let us consider next the field equation for the complex three-form, namely eq. (2.13). 
Since the two scalar fields are constant the SU(1, 1)/O(2) connection vanishes and we have: Using our ansatz we immediately obtain: Hence if α = 1, the field equations for the three-form reduces to: which are nothing else but eq.s (2.6-2.7). In other words the solution of type IIB supergravity with threeform fluxes exists if and only if the transverse space admits closed and imaginary anti-self-dual forms Ω (2,1) as we already stated 13 . In order to show that also the Einstein's equation is satisfied by our ansatz we have to calculate the (trace subtracted) stress energy tensor of the five and three index field strengths. For this purpose we need the components of F RR [5] . These are easily dealt with. Relying on the ansatz (2.27) and on eq. (2.4) for the vielbein we immediately get: where: Then by straightforward algebra we obtain: Inserting eq.s (2.40) and (2.8) into Einstein's equations: (2. 41) we see that they are satisfied, provided α = 1 (2.42) and the master equation (2.34) is satisfied. This concludes our proof that an exact D3-brane solution with a Y transverse space does indeed exist. 3 An example without mass deformations and no harmonic Ω (2,1) : In [6] as a master example of the generalized Kronheimer construction of crepant resolutions the following case was considered: the action of the group Z 3 ⊂ SU(3) on the three-complex coordinates {x, y, z} being generated by the matrix: Following the steps of the construction one arrives at the following nine-dimensional flat Kähler manifold where Q is the three dimensional representation of Z 3 generated by g, while R denotes the regular representation. The points of S Z 3 are identified with the following triplet of matrices of 3 × 3 matrices: The nine complex coordinates of S Z 3 are the matrix entries Φ A,B,C . With reference to the quiver diagram of fig.2 which is dictated by the McKay matrix A ij appearing in the decomposition 14 : , . . . are interpreted as the complex scalar fields of as many Wess-Zumino multiplets in the bifundamental of the U i (N) groups mentioned in the lower suffix. In the case of a single brane (N=1) the quiver group G Z 3 has the following structure: and its maximal compact subgroup F Z 3 ⊂ G Z 3 is the following: The gauge group F Z 3 and its complexification F Z 3 are embedded into SL(3, C) by defining the following two generators: and setting: The HKLR Kähler potential The Kähler potential of the linear space S Z 3 , which in the D3-brane gauge theory provides the kinetic terms of the nine scalar fields Φ A,B,C 1,2,3 is given by: where the three matrices A, B, C are those of equation (3.4). According with the principles of the Kronheimer construction, the superpotential is given by W (Φ) = const × Tr ([A, B] C). The final HKLR Kähler metric, whose determination requires two steps of physical significance: 1. Reduction to the critical surface of the superpotential i.e. ∂ Φ W = 0 2. Reduction to the level surfaces of the gauge group moment maps by solving the algebraic moment map equations, was calculated in [6] according with the general theory there summarized, which is originally due to the authors of [30]. The final form of HKLR Kähler potential is provided by: where, as it was extensively discussed in [7], the coefficient α might be adjusted, chamber by chamber, in chamber space, so as to make the periods of the tautological line bundles integer on the homology basis. Setting: where z 1,2,3 are the three complex coordinates and ζ ζ ζ = {ζ 1 , ζ 2 } the two Fayet-Iliopoulos parameters. 
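Step 1 above, the reduction to the critical surface of the superpotential W(Φ) = const × Tr([A,B]C), simply imposes the vanishing of the three matrix commutators. A short symbolic check of this statement is sketched below; it is purely illustrative, using generic 3×3 matrices rather than the Z₃-equivariant triplet of eq. (3.4), and with the overall constant set to one.

```python
import sympy as sp

n = 3
A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"a{i}{j}"))
B = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"b{i}{j}"))
C = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"c{i}{j}"))

W = ((A * B - B * A) * C).trace()          # W = Tr([A,B] C)

# dW/dC_{ij} reproduces the commutator [A,B]_{ji}: setting it to zero, together with
# the analogous derivatives with respect to A and B, gives the F-flatness conditions
# [A,B] = [B,C] = [C,A] = 0 that define the critical surface.
grad_C = sp.Matrix(n, n, lambda i, j: sp.diff(W, C[i, j]))
print(sp.simplify(grad_C - (A * B - B * A).T))   # zero matrix
```

The same computation with derivatives taken with respect to A and B yields [B,C] and [C,A], so the critical surface is the locus of commuting triplets. We now return to the explicit form of the functions entering the HKLR Kähler potential.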
Let us describe the explicit form of these functions. To this effect let us name ζ 1 = p, ζ 2 = q, and let us introduce the following blocks: then we have: The issue of the Ricci-flat metric One main question is whether the metric arising from the Kähler quotient, which is encoded in eq. (3.11) is Ricci-flat. A Ricci-flat metric on the crepant resolution of the singularity C 3 /Z 3 , namely on O P 2 (−3), is known in explicit form from the work of Calabi 15 [42], yet it is not a priori obvious that the metric defined by the Kähler potential (3.11) is that one. The true answer is that it is not, as we show later on. Indeed we are able to construct directly the Kähler potential for the resolution of C n /Z n , for any n ≥ 2, in particular determining the unique Ricci-flat metric on O P 2 (−3) with the same isometries as the metric (3.11) and comparing the two we see that they are different. Here we stress that the metric defined by (3.11) obviously depends on the level parameters ζ 1 , ζ 2 while the Ricci-flat one is unique up to an overall scale factor. This is an additional reason to understand a priori that (3.11) cannot be the Ricci flat metric. Actually Calabi in [42] found an easy form of the Kähler potential of a Ricci-flat metric on the canonical bundle of a Kähler-Einstein manifold, and that result applies to the cases of the canonical bundle of P 2 . However, in view of applications to cases where we shall consider the canonical bundles of manifolds which are not Kähler-Einstein, in the section we stick with our strategy of using the metric coming from the Kähler quotient as a starting point. As we have noticed above the HKLR Kähler metric defined by the Kähler potential (3.11) depends only on the variable Σ defined in eq. (3.12). It follows that the HKLR Kähler metric admits U(3) as an isometry group, which is the hidden invariance of Σ. The already addressed question is whether the HKLR metric can be Ricci-flat. An almost immediate result is that a Ricci-flat Kähler metric depending only on the sum of the squared moduli of the complex coordinates is unique (up to a scale factor) and we can give a general formula for it. We can present the result in the form of a theorem. Theorem 3.1. Let M n be a non-compact n-dimensional Kähler manifold admitting a dense open coordinate patch z i , i = 1, . . . , n which we can identify with the total space of the line bundle O P n−1 (−n), the bundle structure being exposed by the coordinate transformation: where u i is a set of inhomogenous coordinates for P n−1 . The Kähler potential K n of a U(n) isometric Kähler metric on M n must necessarily be a real function of the unique real variable Σ = n i=1 |z i | 2 . If we require that metric should be Ricci-flat, the Kähler potential is uniquely defined and it is the following one: where k is an irrelevant additive constant and > 0 is a constant that can be reabsorbed by rescaling all the complex coordinates by a factor , namely z i → z i . Proof 3.1.1. The proof of the above statement is rather elementary. It suffices to recall that the Ricci tensor of any Kähler metric g ij = ∂ i ∂ j K(z, z) can always be calculated as follows: In order for the Ricci tensor to be zero it is necessary that Det [g] be the square modulus of a holomorphic function |F (z)| 2 , on the other hand under the hypotheses of the theorem it is a real function of the real variable Σ. Hence it must be a constant. 
It follows that we have to impose the equation: Let K(Σ) be the sought-for Kähler potential; calculating the Kähler metric and its determinant we find: Inserting eq. (3.20) into eq. (3.19) we obtain a nonlinear differential equation for K(Σ), of which eq. (3.17) is the general integral. This proves the theorem. ♦

Particular cases

It is interesting to analyze particular cases of the general formula (3.17).

The case n = 2: Eguchi-Hanson. The case n = 2, yielding a Ricci-flat metric on O_{P¹}(−2), is the Eguchi-Hanson case, namely the crepant resolution of the Kleinian singularity C²/Z₂. This is known to be a HyperKähler manifold, and all HyperKähler metrics are Ricci-flat. Hence the HKLR metric must also be Ricci-flat and identical with the one defined by eq. (3.17). Actually we find: which follows from the identification of the hypergeometric function with combinations of elementary transcendental functions occurring for special values of its indices. The second transcription of the function is precisely the Kähler potential of the Eguchi-Hanson metric in its HKLR form as it arises from the Kronheimer construction (see for instance [6]).

The case n = 3: O_{P²}(−3). The next case is the one of interest for the D3-brane solution. For n = 3, setting the scale constant equal to 1, which we can always do by a rescaling of the coordinates, we find: The second way of writing the Kähler potential follows from one of the standard Kummer relations among hypergeometric functions. There is a third transcription that, also in this case, allows one to write it in terms of elementary transcendental functions. Before considering it, we use eq. (3.22) to study the asymptotic behavior of the Kähler potential for large values of Σ. We obtain: Eq. (3.23) shows that the Ricci-flat metric is asymptotically flat, since the Kähler potential approaches that of C³. As anticipated, there is an alternative way of writing the Kähler potential (3.22), which is the following: The identity of eq. (3.24) with eq. (3.22) can be verified with analytic manipulations that we omit. The representation (3.24) is particularly useful to explore the behavior of the Kähler potential at small values of Σ. We immediately find that: The behavior of K_{Rflat}(Σ) is displayed in fig. 3.

The harmonic function in the case Y^[3] = O_{P²}(−3)

Let us now consider the equation for a harmonic function H(z, z̄) on the background of the Ricci-flat metric of Y^[3] that we have derived in the previous sections. Once again we suppose that H = H(Σ) is a function only of the real variable Σ, or equivalently of R = √Σ. For the Ricci-flat metric the Laplacian equation takes the simplified form ∂_i ( g^{ij̄} ∂_j̄ H(Σ) ) = 0, since the determinant of the metric is constant. Using the Kähler metric that follows from the Kähler potential K_{Rflat}(Σ) defined by eqs. (3.22), (3.24), we obtain a differential equation that, upon the change of variable Σ = r^{1/3}, takes the following form: The general integral of eq. (3.26) is displayed below: κ, λ being the two integration constants. We fix these latter with boundary conditions. We argue in the following way: if the transverse space to the brane were the original C³/Z₃ instead of the resolved variety O_{P²}(−3), then the harmonic function describing the D3-brane solution would be the following: The asymptotic identification for R → ∞ with the Minkowski metric in ten dimensions would be guaranteed, while at small values of R we would find (via dimensional transmutation) the standard AdS₅ metric times that of S⁵ (see the following eqs. (3.33) and (3.34)).
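Before fixing κ and λ, we note that the content of Theorem 3.1 is easy to check symbolically. For a Kähler potential depending only on Σ = Σ_i |z_i|², the metric is g_{ij̄} = K′(Σ)δ_{ij} + K″(Σ) z̄_i z_j, whose determinant is (K′)^{n−1}(K′ + ΣK″); Ricci-flatness forces this determinant to be constant, and the resulting first-order equation for K′ integrates to K′ = (c + b Σ^{−n})^{1/n}, whose primitive is the hypergeometric expression quoted above. The sketch below is our own verification for n = 3; the symbols b and c stand for the two integration constants, related respectively to the resolution scale and to the constant value of det g.

```python
import sympy as sp

S, b, c = sp.symbols("Sigma b c", positive=True)
n = 3

# Radial ansatz K = K(Sigma):  det(g) = K'(S)**(n-1) * (K'(S) + S*K''(S)).
# Ricci-flatness <=> det(g) = const = c, whose first integral is
#   K'(S) = (c + b*S**(-n))**(1/n).
Kp = (c + b * S**(-n)) ** sp.Rational(1, n)

det_g = Kp ** (n - 1) * (Kp + S * sp.diff(Kp, S))
print(sp.simplify(det_g - c))        # 0  -> the radial metric is Ricci-flat
```

Integrating K′ once more reproduces, up to the additive constant k, the hypergeometric Kähler potential of eq. (3.17); for n = 2 the same first integral gives back the Eguchi–Hanson potential quoted above. We now return to fixing the integration constants κ and λ of the harmonic function.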
In view of this, naming R the square root of the variable Σ, we fix the coefficients κ, λ in the harmonic function H res (R) in such a way that for large values of R it approaches the harmonic function pertaining to the orbifold case (3.28). The asymptotic expansion of the function: H res (R) ≡ C(r 6 ) is the following one: Hence the function H res (R) approximates the function H orb (R) if we set κ = 2 , λ = π √ 3 . In this way we conclude that: The overall behavior of the function H res (R) is displayed in fig.4. The asymptotic limits of the Ricci-flat metric for the D3-brane solution on In the case of a standard D3-brane on Y [3] = C 3 R 6 one writes the same ansatz as in eq. (2.2) and (2. 26-2.27) where now the Kähler metric is g αβ = δ αβ Rewriting the complex coordinates in terms of polar coordinates z 1 = e iϕ 1 R cos φ, z 2 = e iϕ 2 R cos χ sin φ, z 3 = e iϕ 3 R sin χ sin φ we obtain that: where: is the SO(6)-invariant metric of a 5-sphere in polar coordinates. In other words the Ricci-flat Kähler metric ds 2 C 3 (which is also Riemann-flat) is that of the metric cone on the Sasaki-Einstein metric of S 5 . At the same time the SO(6)-invariant harmonic function on C 3 is given by the already quoted H orb (R) in (3.28), and the complete 10-dimensional metric of the D3-brane solution takes the form: For R → ∞ the metric (3.33) approaches the flat Minkowski metric in d = 10, while for R → 0 it approaches the following metric: Let us now cosider the asymptotic behavior of the Ricci-flat metric on O P 2 (−3). In order to obtain a precise comparison with the flat orbifold case the main technical point is provided by the transcription of the S 5 -metric in terms of coordinates well adapted to the Hopf fibration: To this effect let Y = {u, v} be a pair of complex coordinates for P 2 such that the standard Fubini-Study metric on this compact 2-fold is given by: the corresponding Kähler 2-form being K P 2 = i 2π g P 2 ij dY i ∧ dY j . Introducing the one form: whose exterior derivative is the Kähler 2-form, dΩ = 2π K P 2 , the metric of the five-sphere in terms of these variables is the following one: where the range of the coordinate ϕ spanning the S 1 fiber is ϕ ∈ [0, 2π]. In this way the flat metric on the metric cone on S 5 , namely (3.31) can be rewritten as follows: Comparison of the Ricci-flat metric with the orbifold metric In order to compare the exact Ricci-flat metric streaming from the Kähler potential (3.22) with the metric (3.12) it suffices to turn to toric coordinates The toric coordinates {u, v} ≡ Y span the exceptional divisor P 2 while w is the fiber coordinate in the bundle. Setting: we obtain: From eq. (3.41) we derive the asymptotic form of the metric for large values of R, namely: The only difference between eq. (3.38) and eq. (3.42) is the range of the angular value ϕ = ψ 3 . Because of the original definition of the angle ψ, the new angle ϕ ∈ 0, 2π 3 takes one third of the values. This means that the asymptotic metric cone is quotiened by Z 3 as it is natural since we resolved the singularity Reduction to the exceptional divisor The other important limit of the Ricci-flat metric is its reduction to the exceptional divisor ED. In the present case the only fixed point for the action of Γ = Z 3 on C 3 is provided by the origin z 1,2,3 = 0 which, comparing with eq. (3.39), means w = 0 ⇒ f = 0. This is the equation of the exceptional divisor which is created by the blowup of the unique singular point. 
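For the reader's convenience, the Hopf-fibration form of the five-sphere metric invoked in the comparison above is, schematically, the standard one; the precise normalization of the Fubini–Study metric and of the connection one-form Ω is the one fixed in the surrounding equations of the text.

```latex
% Schematic Hopf-fibration form of the S^5 metric used in the asymptotic comparison;
% normalizations of ds^2_FS and Omega are those fixed in the text (dOmega = 2*pi*K_{P^2}).
\begin{equation*}
  ds^{2}\big(S^{5}\big) \;=\; ds^{2}_{\mathrm{FS}}\big(\mathbb{P}^2\big)
  \;+\; \big(d\varphi + \Omega\big)^{2},
  \qquad d\Omega \;=\; 2\pi\,\mathbb{K}_{\mathbb{P}^2},
  \qquad \varphi \in [0,2\pi].
\end{equation*}
```

The only difference surviving at large R between the flat cone metric and the asymptotic form of the Ricci-flat metric is then the reduced period of the fiber angle, as stated above.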
In the basis of the complex toric coordinates Y i ≡ {u, v, w}, the Kähler metric derived from the Kähler potential (3.22) has the following appearance: where the invariants f, are defined in equation (3.39). Hence the reduction of the metric to the exceptional divisor is obtained by setting dw = dw = 0 in the line element ds 2 Rf lat ≡ g Rf lat ij dY i dY j and performing the limit f → 0 on the result. We obtain: which is the standard Fubini-Study metric on P 2 obtained from the Kähler potential: As we see, the metric on the exceptional divisor obtained from the Ricci-flat metric has no memory of the Fayet Iliopoulos (or stability parameters) p, q which characterize instead the HKLR metric obtained from the Kronheimer construction. This is obvious since the Ricci-flat metric does not depend on p, q. On the other hand the HKLR metric, that follows from the Kähler potential (3.11), strongly depends on the Fayet Iliopoulos parameters ζ 1 = p , ζ 2 = q and one naturally expects that the reduction of ds 2 HKLR to the exceptional divisor will inherit such a dependence. Actually this is not the case since the entire dependence from p, q of the HKLR Kähler potential, once reduced to ED, is localized in an overall multiplicative constant and in an irrelevant additive constant. This matter of fact is conceptually very important in view of our conjecture that the Ricci-flat metric is completely determined, by means of the Monge-Ampère equation, from the Kähler metric on the exceptional divisor, as it is determined by the Kronheimer construction. In the present case where, up to a multiplicative constant, i.e. a homothety there is only one Ricci-flat metric on O P 2 (−3) with the prescribed isometries, our conjecture might be true only if the reduction of the HKLR metric to the exceptional divisor is unique and p, q-independent, apart from overall rescalings. It is very much reassuring that this is precisely what actually happens. 4 The case Y → C 3 /Z 4 and the general problem of determining a Ricciflat metric The next case of interest to us at present is the resolution Y → C 3 /Z 4 whose associated Kronheimer construction was studied in detail in [7]. (A study of C 3 /Z 4 as a non-complete intersection affine variety in C 9 is presented in the Appendix.) The corresponding MacKay quiver is displayed in fig.5. Differently from the case of the resolution Y → C 3 /Z 3 studied in section 3, here the HKLR Kähler metric cannot be derived explicitly since the moment map equations form a system of algebraic equations of higher degree. Yet as it was explained in [7] one can work out the restriction of such metric to the compact component of the exceptional divisor which is the second Hirzebruch surface F 2 . Indeed it was shown that the quotient singularity C 3 /Z 4 can be completely resolved by totK F 2 [7], that denotes the total space of the canonical bundle over the second Hirzebruch surface. Hence the main goal we would like to achieve is the construction of a Ricci-flat Kähler metric on totK F 2 which restricted to the base F 2 of the bundle hopefully coincides with Kähler metric on the same surface provided by the Kronheimer construction. Being a non-compact Calabi-Yau variety the existence of a Ricci-flat Kähler metric on totK F 2 is not implied by the classic Yau theorem, valid for smooth compact manifolds. To ask whether Ricci-flat metrics do exist, one has to specify boundary conditions. 
Figure 5: The quiver diagram describing the C^3/Z_4 singular quotient and codifying its resolution via Kähler quotient à la Kronheimer. The same quiver diagram codifies the construction of the corresponding gauge theory for a stack of D3-branes. Each node is associated with one of the 4 irreducible representations of Z_4 and in each node we located one of the U_i(1) groups with respect to which we perform the Kähler quotient. This is the case of one D3-brane. For N D3-branes, all gauge groups U_i(1) are promoted to U_i(N).

We will be interested in metrics that, just as in the previous example, are asymptotically conical, namely of the form ds^2 ≃ dR^2 + R^2 ds^2(X_5) for a suitable radial coordinate approaching R → ∞. Essentially by definition, ds^2(X_5) is a Sasaki-Einstein metric on a compact manifold (or orbifold) X_5. Then we fix the boundary conditions for our metric by requiring that asymptotically it approaches the cone over S^5/Z_4. With this boundary condition^17 the theorems in [28] imply the existence of a unique Ricci-flat Kähler metric in every Kähler class of the resolved variety Y. Analogous existence results for isolated quotient singularities C^m/Γ were given in [44] and later extended in [45] and [46] to crepant resolutions of general isolated conical singularities. See also [47] for applications of the general existence results in the toric context, including the resolution of the conical singularities on the Y^{p,q} Sasaki-Einstein five-manifolds [48]. The existence results are analogous to Yau's theorem in the compact case. In fact, recently there has been renewed interest and activity in this area, with new results concerning, for example, the existence of Sasaki-Einstein manifolds outside the toric realm. These results are related to the idea of "stability"; for reference, recent works on this subject include [49,50]. For many purposes, knowledge of the existence of a metric, together with some of its key properties, can be sufficient for extracting interesting physical information. This is true also in the case of the AdS/CFT correspondence. However, if one is interested in constructing the metrics explicitly, namely in writing them down in some coordinate system, then the existence theorems are not helpful, because they are not constructive (as far as we know). The classic examples of explicit Ricci-flat Kähler metrics in real dimension four include Eguchi-Hanson, Gibbons-Hawking, Taub-NUT, and Atiyah-Hitchin. In real dimension six, for a long time the resolved and deformed metrics on the conifold singularity constructed by Candelas and de la Ossa [51] were the only (non-trivial) known examples of explicit Ricci-flat Kähler metrics. The so-called "resolved conifold" metric is a metric on the total space of the vector bundle O(−1) ⊕ O(−1) → P^1; the isometry group is SU(2) × SU(2) × U(1) and asymptotically it approaches the cone over the Sasaki-Einstein manifold T^{1,1} (with the same isometry). In other cases, different kinds of resolutions exist, where instead of a P^1 one replaces the singularity with a compact four-dimensional manifold (or orbifold) M_4. A general ansatz that yields explicit Ricci-flat Kähler metrics was constructed by Page and Pope (in any dimension) [52], but this is somewhat limited as it assumes that the metric induced on M_4 is Kähler-Einstein.

^17 The results in [28] require some more precise estimate on the fall-off of the metric at infinity.
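For orientation, the ansatz of [52] just mentioned (which goes back to Calabi) has, on the canonical bundle of a four-manifold M_4, the cohomogeneity-one structure sketched below; this is only the schematic shape of the ansatz, with the radial functions left unspecified, and not the precise form used in those references:
\[
ds^{2}\;=\;h(\rho)^{2}\,d\rho^{2}\;+\;f(\rho)^{2}\,\bigl(d\psi+\mathcal{A}\bigr)^{2}\;+\;g(\rho)^{2}\,ds^{2}(M_{4}) ,
\]
where ψ is the angle on the fibres and dA is the curvature of the natural connection on the canonical bundle, i.e. proportional to the Ricci form of M_4 and hence to its Kähler form in the Kähler-Einstein case. Imposing Ricci-flatness yields ordinary differential equations for h, f, g that can be integrated in closed form when the metric on M_4 is Kähler-Einstein; this is why the construction applies to the cases recalled in the next paragraph and not, for instance, to the Hirzebruch surfaces F_1 and F_2 considered later.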
Explicit Kähler-Einstein metrics on smooth four-dimensional manifolds are known only for M 4 = P 2 and M 4 = P 1 × P 1 . The former leads to the construction of an explicit Ricci-flat Kähler metric on the total space of O P 2 (−3) totK P 2 , which is the resolution of the quotient singularity C 3 /Z 3 and was fully described in section 3 (see also [42]). The latter leads to the construction of an explicit Ricci-flat Kähler metric on the total space of totK P 1 ×P 1 , which is the resolution of the conical singularity (conifold)/Z 2 . The corresponding Sasaki-Einstein manifolds at infinity are, respectively, S 5 /Z 3 and T 1,1 /Z 2 . For the case of totK P 1 ×P 1 , a generalisation was constructed [53], namely an explicit Ricci-flat Kähler metric that depends on the two independent Kähler classes parameters: this construction however uses the SU(2) × SU(2) × U(1) symmetry and as a result the metric is co-homogeneity one, although it does not fit in the ansatz of [42] and [52]. Recently, the ansatz of [42,52] was used to produce explicit Ricci-flat Kähler metrics on the canonical bundle of generalised flag manifolds [54]. Extensions that include the dependence on several Kähler class parameters have appeared in [55,56]. The Ricci-flat Kähler metric on totK F 1 The metric that we shall present in the sequel has some distinctive features that are shared with an explicit Ricci-flat Kähler metric on totK F 1 , where F 1 is the first Hirzebruch surface, i.e., the first del Pezzo surface dP 1 , constructed in [57]. This metric is many ways "more complicated" than all the other metrics mentioned above. Let us summarise some of its salient properties: 1. Asymptotically it approaches the cone over the Sasaki-Einstein manifold 18 Y 2,1 . The isometry group is SU 3. It is cohomogeneity two. In particular, there is a homogeneous base, given by a round P 1 , and then the metric depends non-trivially on two coordinates. 4. It is toric, in that there is a U(1) 3 ∈ SU(2) × U(1) × U(1) subgroup of isometries that leaves invariant the Kähler form, and contains the torus of the toric three-fold totK F 1 . This group allows one to introduce three moment map coordinates and three angular coordinates ("action-angle" coordinates system). 5. It also possesses an additional "hidden symmetry" corresponding to the existence of a so-called Hamiltonian two-form [14], that implies the existence of a coordinate system (called "orthotoric") in which the metric components are all given in terms of functions of one variable. 6. Imposing this extra symmetry however, comes at the price of loosing one of the two Kähler class parameters. Indeed it was later demonstrated in [29] that the two-parameter metric (that is known to exist thanks to the general theorems of [45,46]) does not posses such Hamiltonian two-form. 7. The metric induced on exceptional divisor M 4 = F 1 is obviously Kähler, but it is not Einstein. Indeed, a Kähler-Einstein metric on F 1 does not exist. 8. In [14] (further explored in detail in [59]) it was shown that this metric is part of a family of (in general only partial 19 ) resolutions of the conical Ricci-flat metrics on the whole family of Y p,q Sasaki-Einstein manifolds. 9. In [29] it is given a relation between the orthotoric coordinates and a set of complex coordinates that is well adapted to the complex structure of totK F 1 , with one complex coordinate on the non-compact fiber C, one coordinate on the fiber P 1 in F 1 and one coordinate on the base P 1 in F 1 . 10. 
A set of local complex coordinates explicitly related to the orthotoric coordinates was given in section 2.2 of [14]. It would be interesting to work out the relation between these and the complex coordinates defined in [29]. Since, similarly to F 1 , also F 2 does not admit a Kähler-Einstein metric, the Ricci-flat metric on totK F 2 cannot be found through the Calabi ansatz [42,52]. We expect the Ricci-flat metric on totK F 2 to share many features with that on totK F 1 , summarised above. One difference is that at infinity it must approach the cone over the Sasaki-Einstein orbifold S 5 /Z 4 , as opposed to the cone over the Sasaki-Einstein manifold Y 2,1 . The Ricci-flat metric on totK F 2 will also be toric and moreover it should have again isometry group SU(2) × U(1) × U(1). This immediately implies that the metric should be co-homogeneity two and in practice it leads to PDE's in two variables. For example, one can write the Monge-Ampere equation for the Kähler potential as a PDE in two variables, or similarly the corresponding equation for the symplectic potential. Without further assumptions, these equations are unlikely to be solvable in closed form. A natural assumption to make is that the metric admits a Hamiltonian two-form, namely that it can be put in the orthotoric form. This is natural because the partial resolution of all the Y p,q singularities arise in this ansatz, with p = 2, q = 1 giving the complete resolution above. Strictly speaking the p > q > 0 should hold, however, it is known that by performing a scaling limit of the Y p,q Sasaki-Einstein metrics, one can recover the limiting cases Y p,p = S 5 /Z 2p and Y p,0 = T 1,1 /Z p , suggesting that the partial resolution metrics may also be extended to these regimes of parameters 20 . A general set up for a metric ansatz with separation of variables In the sequel we begin by considering a metric on a 6-dimensional manifold M 6 which is Kähler and by construction admits SU(2)×U(1)×U(1) as an isometry group. This metric depends on two functions Υ(s) and P (t) of two real coordinates s, t invariant with respect to the isometry group. The other coordinates are four angles, with ranges and periodicities specified according with the following summary table: The metric, which is defined by means of the following vielbein is derived, by generalization, from the orthotoric metrics discussed in 21 [59,14] where the relation of latter with the metrics on Sasakian 5-manifolds Y p,q is also presented. Although in those references it was assumed that p > q, presently we will consider setting p = q = 2 and show that this yields an orthotoric metric that we shall identify as a Ricci-flat Kähler metric on totK WP [112] . The asymptotic metric corresponds to a cone over the limiting case Y 2,2 = S 5 /Z 4 of the Sasaki-Einstein manifolds Y p,q [48]. The line-element: is Kählerian by construction since it admits the following closed Kähler 2-form: Indeed K ort is closed by construction and it is a Kähler 2-form since we have: where: is an antisymmetric tensor which squares to minus the identity, namely it is a frame-index complex structure tensor. It should be noted that the Kähler form in eq. (5.5) is independent from the two functions Υ(s) and P (t), namely it is universal for an entire class of metrics. 21 In particular, see the line element (4.1) in [59], after correcting some typos in that expression. The relation to our coordinates is given by t = y − 1, s = x − 1. 
Moreover, we have θ here = θ there , φ here = φ there , as well as χ here = τ there , τ here = 2ψ there + 2 3 τ there . The orthotoric metric on K WP [112] Within the general scope of the above described setup we have that the metric (5.4) is Ricci-flat for the following choice of the two functions parameterizing the line-element: With the choice (5.8), from eq. (5.4) we obtain: The reason for the subscript totK WP [112] is that the Ricci flat metric (5.9) turns out to be defined over the total space of the canonical bundle of the (singular) projective space WP [112] namely on totK WP [112] . It is a simple matter to verify that asymptotically, for s → −∞, the metric (5.9) is indeed approximatively conical, and therefore Quasi-ALE [28]. To see this, one can set s = − 2 3 R 2 , so that at leading order in R. Since the metric is Ricci-flat Kähler, and it takes the form of a cone over a fivedimensional space, it follows that locally the five-dimensional metric ds 2 X 5 is a Sasaki-Einstein metric. In appendix B we discuss the metric ds 2 X 5 in more detail, showing that X 5 = S 5 /Z 4 , with a specific Z 4 action. As we will show in sect. 8, the metric induced by (5.9) on the exceptional divisor WP [112] is the same as the one obtained on that space while resolving a C 3 /Z 4 orbifold singularity by means of the Kronheimer construction localized on the unique type III wall W 2 displayed by its chamber structure (see sect. 6.4 of [7]). Integration of the complex structure and the complex coordinates In their algebraic geometry description, the varieties of the type here considered are complex threefolds K 3 that are canonical bundles of some compact Kähler two-fold D 2 which, on its turn, is the total space of a line-bundle over P 1 : This hierarchical structure implies a hierarchy in the complex coordinates that can be organized and named in the following way according with the nomenclature of [7]: u = coordinate on the P 1 base of D 2 ; v = coordinate on the fibers of D 2 w = coordinate on the fibers of K 3 (5.12) This structure is reflected in the integration of the complex structure that can be deduced from the combination of the Kähler 2-form with the metric. The path to the integration Indeed, having the metric and the Kähler form we can construct the complex structure tensor. Then we try to integrate the complex structure we have found. This is very important in order to organize the fibred structure of the manifold. First from eq. (5.2) one reads off the vielbein E i µ defined as: The 6 × 6 matrix E i µ depends only on the s, t variables and on the angle θ (as we will see θ can be traded for the coordinate ρ = tan θ 2 and in the symplectic formalism it is a moment variable). The true angular variables are the phases of the three complex coordinates namely φ, τ , χ. As a next step one introduces the inverse vielbein which is just the matrix inverse of E i µ according with the definition This enables us to write the differentials of the coordinates as linear combinations of the vielbein dx µ = E µ j E E E i . The complex structure tensor in coordinate indices JW. Using the vielbein matrix and its inverse we can convert the frame indices of the complex structure tensor to coordinate ones and we get: Integration of the autodifferentials The matrix JW has three eigenvectors corresponding to the eigenvalue i and three corresponding to the eigenvalue −i (their complex conjugates). 
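Schematically, the two steps just described — converting the frame-index complex structure to coordinate indices with the vielbein, and extracting the one-forms spanning its +i eigenspace — are pure linear algebra. A minimal sketch, with illustrative names and not the paper's actual code, is the following; integrating the resulting differentials (when they are integrable) then produces the complex coordinates, as done for u, v, w below.

```python
# Minimal sketch (illustrative names, not the paper's code): from the vielbein E^i_mu and the
# frame-index complex structure J_frame (with J_frame @ J_frame = -1) build the coordinate-index
# tensor J^mu_nu = (E^{-1})^mu_i J^i_j E^j_nu and extract the one-forms spanning its +i eigenspace.
import numpy as np

def coordinate_complex_structure(E, J_frame):
    """E[i, mu] = vielbein; returns J^mu_nu in coordinate indices."""
    return np.linalg.inv(E) @ J_frame @ E

def holomorphic_one_forms(J_coord, tol=1e-9):
    """Rows = components omega_mu of candidate (1,0)-forms omega = omega_mu dx^mu."""
    # one-forms transform with the transpose of J, so we diagonalise J_coord^T
    vals, vecs = np.linalg.eig(J_coord.T.astype(complex))
    cols = [k for k in range(len(vals)) if abs(vals[k] - 1j) < tol]
    return vecs[:, cols].T

# toy check in one complex dimension: flat vielbein, standard complex structure on R^2
E = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])
print(holomorphic_one_forms(coordinate_complex_structure(E, J)))  # proportional to (1, i): dz = dx + i dy
```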
The three eigenvectors corresponding to i are the rows of the following matrix The coordinate w is obtained from the integration of dY 1 . Here we are not assisted by SU (2) invariance to define the exact coefficient in front of the differential. We choose a coefficient that appears reasonable from the result and what we obtain is either the coordinate w of other approaches or a power w a . In the sequel comparing with the construction from the iterative procedure we will see what is the correct identification of the power a. At the beginning our educated guess suggests the use of a coefficient 4/3. So we set where we have introduced the new functions: One necessary property that must be possessed by the function Φ(s) is: which defines the exceptional divisor at w = 0 Notice that with the ranges of the coordinates that we specified in (5.1), we see that u is a complex coordinate on a P 1 , while ν and w are complex coordinates on two copies of C. AMSY symplectic formalism and transcription of the metric in this formalism According to the formalism introduced by Abreu [60] and developed by Martelli, Sparks and Yau [61], in the case of toric Kähler varieties of complex dimension n, one can find moment maps µ i and angular variables Θ i such that the Kähler 2-form takes the universal form: At the same time there exist a function G(µ i ) of the n real moment variables, named the symplectic potential, such that the metric takes the following universal form: where by definition: G ij ≡ ∂ i,j G is the Hessian of the symplectic potential and G −1 ij is the inverse of the Hessian matrix. In our case the three angular variables are Θ Θ Θ = {φ, τ, χ} and the Kähler form is given by K as defined in eq. (5.5). Transforming the pseudo angle θ to the variable ρ by setting θ = 2 arctan ρ and implementing such change of variables in the Kähler form we obtain: which is compatible with eq. (6.1) if the coefficient of each of the three angular variables τ , χ , φ is a closed differential that can be integrated to a single new moment coordinate function of the real coordinates ρ, s, t. Hence we introduce the vector of moments: and the Kähler 2-form (6.3) can be rewritten as: provided we have defined the coordinate transformation: The unique inverse transformation of the above coordinate change is the following one: The new real coordinates are named u,v,w with gothic letters since they are the symplectic counterparts of the complex coordinates u, v, w yet, differently from the latter, we do not need the complex structure to find them and hence they are independent from the metric. Transcription of the metric in the toric symplectic form At this point we try to rewrite the metric depending on the two functions: in the symplectic form (6.2). Setting: we easily derive that ds 2 ort takes the form (6.2) with the following matrix G ij : It remains to be seen if we are able to retrieve the symplectic potential from which the above matrix is obtained through double derivatives. With some integrations and some educated guesses we find that the form (6.10) of the matrix can be reproduced if we write the symplectic potential as follows: and where G(v, w) is some function of the two fibre coordinates v,w only. With this choice the matrix G ij becomes: and the full-fledged expression of the line element can be obtained by substitution. Comparing the obtained result with eq. 
(6.10) we easily see that the functions M(v, w) = M (s, t) and F(v, w) = Φ(s, t) can be expressed in terms of the derivatives G (2,0) (v, w), G (0,2) (v, w), but in order to avoid other functions we get a second order differential constraint on the symplectic potential G(v, w) that relates its mixed derivatives to G (2,0) (v, w), G (0,2) (v, w). This differential is expressed in a simpler way by means of the original coordinates s, t. We shall presently derive it. We anticipate that its solution very strongly limits the possibilities so that it has to be discarded. In other words we have to accept a generic function G(v, w) and try to match it with the boundary conditions on the exceptional divisor. Orthotoric separation of variables and the symplectic potential In order to compare the generic metric in symplectic formalism provided by the symplectic potential displayed in eq.s (6.11), (6.12) with the following two-function metric 22 : (6.14) we make the following steps. First we regard the function G(v, w) as a function only of t and s, as it is evident from the transformation rule (6.7), and we write: G(v, w) ≡ Γ(t, s). By means of the transformation (6.7) we can rewrite the generic metric (6.2) produced by the symplectic potential (6.11-6.12) in terms of the variables s, t, instead of v, w. The result coincides with ds 2 2f un as given in eq. (6.14) if the following conditions hold true: The first two equations in (6.15) just provide the identification of the two functions M (s, t) and Φ(s, t) in terms of second order derivatives of the symplectic potential. On the other hand the last equation of (6.15) is a very strong constraint on the function Γ(t, s) which severely restricts the available choices of Γ(t, s). 6.3 The symplectic potential of the Ricci-flat orthotoric metric on totK WP [112] In the case of the canonical bundle totK WP [112] , whose Ricci-flat metric is given by eq. (5.9), eq.s (5.8) imply By means of two double integrations and modulo linear functions in s, t (they are irrelevant for the metric) we determine the explicit form of the potential Γ(t, s): The function Γ WP [112] (t, s) satisfies by construction the differential constraint encoded in the third of eq.s (6.16). Using the transformation rule (6.7) we can rewrite it as a function of the symplectic variables v, w. In this way we arrive at the following symplectic potential where we have used the liberty of adding linear functions of v or w to obtain the most convenient form of its reduction to the exceptional divisor, located at w = 3 2 . The function Kähler metrics on Hirzebruch surfaces and their canonical bundles For the case of the canonical bundle on F 2 , which is the complete resolution of the C 3 /Z 4 singularity, we have additional information that is relevant and inspiring for the general case. Let us summarize the main points. According to the results of [7] there is a well adapted system of complex coordinates that arise from the toric analysis of C 3 /Z 4 and of its resolution. These coordinates are named as follows: z i = {u, v, w} and are defined on a dense open chart reaching all components of the exceptional divisor. Their interpretation was already anticipated in eq. (5.12) and it is the following. The coordinate w spans the fibers in the canonical bundle Y π −→ F 2 while u, v span a dense open chart for the base manifold (i.e. the compact component F 2 of the exceptional divisor ED). 
In particular since F 2 is a P 1 bundle over P 1 , namely F 2 π −→ P 1 , the coordinate u is a standard Fubini-Study coordinate for the base P 1 while v spans a dense open chart of the fibre P 1 . This set of coordinates can be used for any F n Hirzebruch surface with n ≥ 1. The action of the isometry group (1.8) on these coordinates was described in [7] and it is as follows: The above explicit action of the isometry group on the u, v, w coordinates suggests the use of an invariant real combination and the assumption that the Kähler potential K Fn of the Kähler metric g Fn should be a function (up to trivial terms Ref (z)) only of n : The function G n ( n ) should also depend on two parameters (we name them ,α) which are associated to the volumes of the two homology cycles of F n , respectively named C 1 and C 2 that also form a basis for the homology group of the total space Y , namely the canonical bundle on F n . Indeed the homology of Y coincides with the homology of the base manifold F n . Introducing the Kähler two form: we need to find: where is a dimensionful parameter providing the scale and α is some dimensionless parameter parameterizing the ratio between the two volumes. The two toric cycles C 1,2 are respectively defined by the following two equations: As pointed out in [7], in addition to the above two properties of the Kähler form, if we consider the Ricci two-form of the Kähler metric on F n we must find: Ric Fn = 2 − n ; Ric Fn = 2 (7.8) It appears that eq.s (7.5-7.8) are strong constraints on the function G n ( n ). It is interesting to see how they are realized in the metric on F 2 obtained from the Kronheimer construction. We will show this below. The metric on F 2 induced by the Kronheimer construction In [7], relying on the Kronheimer construction, we have constructed an analytically defined Kähler metric on the total space of the canonical bundle of F 2 . The Kähler potential has only an implicit definition as the largest real root of a sextic equation. Yet its reduction to the compact exceptional divisor, which is indeed the 2nd Hirzebruch surface, is explicit and the Kähler potential of this metric can be exhibited in closed analytic form. We think that this information is very important for the comparison between the parameters of the Ricci-flat metric appearing in supergravity with those emerging in the Kronheimer construction that are the Fayet Iliopoulos parameters of the dual gauge theory. Following the chamber structure discussed in [7] we choose the chamber VI defined by the following inequalities on the three Fayet Iliopoulos parameters ζ 1,2,3 : and chamber VIII, defined instead by the following ones: Inside those two chambers we make the choice: For α > 0 we are in chamber VI, while for α < 0 we are in chamber VIII. For α = 0 we are instead on the wall where the non singular variety: denoting by totK M the total space of the canonical bundle of a Kähler manifold (or orbifold) M. 
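As a side remark on eq. (7.8) above: the two values quoted there are the standard adjunction numbers of the Hirzebruch surface. For a smooth rational curve C in a complex surface S adjunction gives
\[
2g(C)-2 \;=\; C\cdot C \;-\; \int_{C} c_{1}(S)
\quad\Longrightarrow\quad
\int_{C} c_{1}(S)\;=\;2+C\cdot C ,
\]
so, identifying C_1 with the section of self-intersection −n and C_2 with the fibre (of self-intersection 0), the periods of the Ricci two-form — which represents c_1 up to the normalisation chosen in the text — are proportional to 2 − n and 2 respectively, in agreement with (7.8).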
The solution of the moment map equations for the two independent moment maps reduced to the exceptional divisor by performing the limit w → 0 is the following one: √ α 2 + 6α + 2 + 8 + 3α + + 4 2α 2 + 6α + 4 (7.14) The complete Kähler potential of the quotient is made of two addends, the pull-back on the constrained surface of the Kähler potential of the flat ambient metric plus the logarithmic term: In the present case we explicitly find: K 0 = 2 α α 2 + 6α + ( + 8) + 2 + 1 + α 2 + 6α + ( + 8) + α 2 + 3 α 2 + 6α + ( + 8) + α + (7.16) and K log = 2(α + 1) log √ α 2 + 6α + 2 + 8 + 3α + + 4 2α 2 + 6α + 4 − 2α log α 2 + 6α + ( + 8) + α + 2(α + 2) /vv (7.17) By explicit calculation we were able to verify that the Kähler potential of the quotient K quotient yields a metric satisfying all the constraints (7.5-7.8). We show this in section 8.3. Reduction to the exceptional divisor In this section we consider the reduction to the exceptional divisor for a generic metric of the class described in section 5, emphasizing that the Kähler metric induced on the divisor is completely determined by the real function P (t) of the real variable t. We carefully consider what are the differential constraints on such a function required by the topology and complex structure of the second Hirzebruch surface F 2 showing that they are all met by the P (t) function that one obtains by localizing the generalized Kronheimer construction of the C 3 /Z 4 singularity resolution on the exceptional divisor. The reduction The reduction to the exceptional divisor is obtained in the Kähler form and in the metric by setting s = −3. The Kähler form on the divisor is the following one while the metric is the following one: and it is completely determined by the function P (t). For the choice: it is the metric on the orbifold WP [112] while for other choices of P (t), obtainable from the Kronheimer construction, ds 2 ED can indeed be a good Kähler metric on the second Hirzebruch surface F 2 . From eq. (8.1) specifying the Kähler 2-form of the exceptional divisor and eq. (8.2) providing its Kähler metric, we immediately work out also the complex structure tensor that has the following appearence: Topology and the functions of the t coordinate We have two important informations on the topology of F 2 , which provide an extremely selective test in order to know whether a certain metric is indeed defined on F 2 or on some different twofold, may be degenerate. The tests are related with the integrals of the Kähler 2-form K and of the Ricci 2-form Ric on the two toric curves C 1,2 respectively defined by the vanishing of either coordinate (u, v) Indeed, as we illustrated in section 7 we must find The explicit reduction of the Kähler form K F 2 to the two cycles C 1 and C 2 is very simple when K F 2 is written in the basis of the real coordinates (t,θ,τ ,φ). Indeed in order to set v = 0 we have just to look for the zeros of the above defined function H(t) that depends by integration from P (t). Let us suppose that H(−|t max |) = 0. We obtain the reduction of the Kähler form to the cycle C 1 by setting t = −|t max | = const < 0, while we get the reduction to the cycle C 2 by setting θ = 0. Hence we see that in order to get F 2 as exceptional divisor we need two conditions, that are necessary, although not sufficient. 1. |t max | = 0 2. 
the range of the coordinate t must be finite [−|t max |,−|t min |] in order to get a finite size for the cycle C 2 If the zero of the function H(t) is at t=0 we immediately know that there is a degeneration and this is indeed the case of WP [112]. If we integrate the complex structure of the exceptional divisor displayed in eq. (8.4) with the same method we used for the whole 6-dimensional space, we find that the coordinate u is exactly the same as in eq. (5.17), while for v we find: Comparison with the result for v in the entire space (eq.s (5. 18-5.19)) tells us that the function Ψ(s) must be finite and non vanishing at s = −3 in order to have a consistent reduction to the divisor: The normalization Ψ(−3) = 1 can always be obtained by an irrelevant rescaling in the definition of v if −3 is not a zero of Ψ(s) while it must be a zero of Φ(s). Interpretation of the function H(t) From the explicit integration of the complex structure we obtain a very important interpretation of the function H(t) in relation with the complex Kähler geometry of the exceptional divisor. Since the Kähler metric on this two-fold has isometry SU(2) × U(1), SU(2) acting on the u variable by linear fractional transformation and on v by multiplication with the u-compensator (cu + d) 2 , as described in eq.s (7.1), the Kähler potential K can be a function only of the invariant combination ≡ 2 defined in eq. (7.2). Relying on the representation of u and v derived from the integration of the complex structure we easily obtain: It follows that: where H −1 denotes the inverse function. Since the range of √ is [0, ∞], it is necessary that the inverse function H −1 maps the semi-infinite interval [0, ∞] in a finite one [−|t max |, −|t min |] defined by: Topological constraints on the function P (t) Given the above topology results characterizing the second Hirzebruch surface and considering the metric of the divisor as given in eq. (8.2) and its Kähler form (8.1) we immediately obtain the conditions on the function P (t). Indeed, while calculating the Ricci form we can specify integral differential conditions on P (t) from the values of its periods mentioned above. We know the explicit form of the complex structure on the exceptional divisor that is obtained by reduction to s = −3 of the complex structure pertaining the full 6-dimensional manifold M 6 . The complex structure of the exceptional divisor was displayed in eq. (8.4). The Ricci form can be calculated by setting its antisymmetric components equal to Ric ij = J k i R kj where R kj is the standard Ricci tensor. In this way we obtain the following general result that exclusively depends on the function P (t): where A(t), B(t), C(t) are functions of the t-variable expressed as rational functions of P (t) and its first and second derivative with simple t-dependent coefficient. We do not write them explicitly for shortness. Then the Ricci 2-form can be easily localized on the two cycles C 1 and C 2 , yielding: Hence, in order to realize the second Hirzebruch surface not only the range of t must have finite extrema [−|t max |,−|t min |] but we should also have: The relation between the function P (t) and the Kähler potential K( ) of the exceptional divisor Our goal is that of determining a Ricci-flat metric on the canonical bundle totK F 2 , starting from a given bona fide Kähler metric on the second Hirzebruch surface, described in terms of the real variables t, θ, τ , φ. 
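In the next paragraphs the Kähler potential on F_2 will again be taken to depend only on the SU(2)-invariant combination of eq. (7.2); for concreteness, the mechanism behind this invariance is the standard one. For an SU(2) element acting as u → (au+b)/(cu+d), v → (cu+d)^2 v one has
\[
1+|u'|^{2}\;=\;\frac{|au+b|^{2}+|cu+d|^{2}}{|cu+d|^{2}}\;=\;\frac{1+|u|^{2}}{|cu+d|^{2}} ,
\]
so that, for example, the combination (1+|u|^{2})^{2}\,|v|^{2} is SU(2)-invariant; whether this matches the precise normalisation adopted for the invariant in eq. (7.2) is immaterial for the argument.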
In the complex description, any Kähler metric is determined by a suitable Kähler potential; given the isometries and their realization on the chosen complex coordinates u, v, the Kähler potential for the F 2 surface is a real function of the invariant combination defined in eq. (7.2) which we generically denote K( ). Therefore it is important to determine the relation between the real variables and the standard complex ones at the same time with the relation between the Kähler potential K( ) and the function P (t) which determines the metric in the real variables. In this respect the essential point to be stressed is that the relation between the real variables and the complex ones is not universal and fixed once for all, rather it depends on the choice of the Kähler potential or viceversa of the function P (t). Hence it is convenient to introduce a name for the inverse function: and find its differential relation with the Kähler potential which follows from a comparison between the metric as determined in complex Kähler geometry from K( ) and as written in real variables. For convenience we rewrite the general real form of the metric on the exceptional divisor in the following more compact way which clearly displays the fibred structure of the exceptional divisor. Next we convert the metric in eq. (8.17) using the substitution rule In this way we transform the metric (8.17) to the complex coordinates u, v and we compare it with the generic metric obtained from a generic Kähler potential K( ). We find that the two metrics coincide provided the following two conditions are satisfied: Given the Kähler potential K( ), which is supposed to depend also on a deformation parameter, the above equation (8.19) allows to rewrite the same metric in real coordinates, provided one is able to invert the first formula, namely, to find as a function of t and of the deformation parameter α. The Kronheimer Kähler potential for the F 2 surface and its associated P (t) function From the Kronheimer construction of the C 3 /Z 4 resolution reduced to the exceptional divisor we have the Kähler potential derived in section 7.1. The result obtained in eq.s (7.15,7.16,7.17) can be summarized writing the following general form of the Kähler potential: 3 + ( + 8) This implies that the interval [0,∞] of is mapped into the interval [0,-3/2] and this suffices to guarantee that the cycle C 1 is contracted to zero as we have already explained. Finally for the function P (t), using the above general formulae we get: First we can verify that when α is either −1 or −2, the surface degenerates, as the metric depends only on the variable u and no longer on v. Using the formula (8.19) we can calculate t and P (t). We find the following relatively complicated answer: α 2 + 6α + ( + 8) + 16 + 8 α 2 + 6α + ( + 8) + 3 2 +4 2 α 2 + 6α + ( + 8) + + 8 + α 2 6 α 2 + 6α + ( + 8) + 2 + 19 +α 3 α 2 + 6α + ( + 8) + 9 D T = 4 α 2 + 6α + ( + 8) The new function G T ( ) maps the interval [0,∞] of into the interval − 3α 8 , − 3 8 (4 + 3α) so that the range of the negative variable t is t ∈ − 3 8 (4 + 3α), − 3α 8 (8.24) and, as expected, the cycle C 1 does not shrink to zero unless α = 0. Quite surprisingly the function G T ( ) can be easily inverted and we find: while for P (t) we get: and we verify that P (t, 0) = −3 2t 2 + 3t (8.27) which is the correct result for the singular case WP [112]. In terms of the function F (t) parameterizing the metric (8.17) we have: The above structure of the function F (t, α) is very much inspiring. 
As we see, it is just the sum of three simple poles that are alternatively simple poles of the dt 2 -coefficient and zeros of the coefficient of the (dτ + (1 − cos(θ)dφ) 2 -term. The range of the variable t turns out to be the interval between two such poles where the sign of the function F (t) is the correct one for in order for the metric (8.17) to have Euclidian signature. The three poles are: We also see what is the mechanism of the degeneration producing the singular WP [112] case: the two poles t 1 and t 3 come to coincide and the coincidence point is zero. This produces the vanishing of the C 1 -cycle as we explained above. Substituting the function F (t, α) as given in eq. (8.28) into the metric we get a final form of a specific Kähler metric on the second Hirzebruch surface which follows from the Kronheimer construction. This metric provides the boundary condition for the Ricci-flat metric on the canonical bundle totK F 2 which must reduce to it when setting ds = 0, dχ = 0 and s = −3. Verification of the topological conditions for the Kähler metric of F 2 . As a matter of check we calculate the periods of the Kähler and Ricci 2-forms also in the real formalism, obtaining the following expected result which holds true for 0 <| α |< 1: The above result for the Kähler form is immediate once the function P (t) = P (t, α) is specified. It is instead interesting to see the subtle way in which the result for the Ricci form is obtained independently from the value of α. Calculating the Ricci tensor of the metric in eq. (8.17) with the function F (t, α) of eq. (8.28) we find the symmetric matrix Ric which, multiplied by the transpose of the complex structure (8.4) with P (t) as in eq. (8.26) produces the Ricci form Ric ED with the structure displayed in eq. (8.13) and the following explicit expressions for the functions A(t) and C(t). The exceptional divisor in symplectic coordinates. Considering next the description of the 6-dimensional manifold M 6 in terms of symplectic coordinates {u, v, w, φ, τ, χ} (see sect.6) we easily find that the localization s = −3 of the exceptional divisor corresponds to w = 3 2 , dw = dχ = 0. Hence defining where G (v, w) is the variable part of the overall symplectic prepotential, we obtain that the Kähler metric on the exceptional divisor has also a description in terms of a symplectic potential given by with moment and angular variables µ i = {u, v}, Θ j = {φ, τ } and line element as follows: where the two matrices are: Reduced to the exceptional divisor, the coordinate transformation (6.6) is very simple. We have: u = 3 4 t(−1 + cos θ), v = − 3t 4 . So if we declare that the function D(v) = Π(t), is a function of t we obtain D (v) = 16 9 Π (t) and replacing these transformation in (8.34-8.35) we obtain that the line element in symplectic coordinates coincides with the line element of eq. (8.2) provided that: So the function F (t) determining the Kähler geometry of the exceptional divisor, linked to its Kähler potential by eq. (8.19), is just 4/3× the second derivative of the non-fixed part of the symplectic potential. The case of the Kähler metric on F 2 with generic α. Applying the above scheme to the Kähler metric on F 2 induced by the Kronheimer construction, namely utilizing in eq. (8.38) F (t) = F (t, α) as given in eq. (8.28) we obtain the following differential equation: We also find: The Monge-Ampère equation and its series expansion In this section we arrive at the core of the issue, i.e. 
the construction of Ricci-flat metrics on the spaces we are concerned with. The common general feature of these is that they are the total space of the canonical bundle of a complex two-dimensional compact Kähler manifold M 4 , the exceptional divisor when the total space is the full or partial resolution of a quotient singularity. In this interpretation the base of the canonical bundle is indeed the exceptional divisor produced by the blow up of an isolated singular point. The additional common structural feature of the Ricci-flat metrics we want to consider is, as we already stressed several times, the group of continuous isometries that they should possess, mentioned in equation (1.8). The action of G iso on the three complex coordinates u, v, w that originate from the integration of the complex structure was displayed in eq.s (7.1). The presence of these isometries imposes very stringent constraints on the Kähler metric that are most efficiently handled at the level of the potential P from which the metric can be obtained by means of derivatives. The condition of Ricci-flatness of the metric is translated into a nonlinear differential equation to be satisfied by the potential P that we name the Monge-Ampère equation. As we have seen in the previous pages, there are three equivalent formulations of the Kähler geometry of the toric six-dimensional manifolds M 6 we are concerned with: A) The complex setup where the geometry is encoded in the Kähler potential P = K (u, v, w, u, v, w) B) The symplectic setup where the geometry is encoded in the symplectic potential P = G (u, v, w) C) The hybrid setup where the geometry is encoded in the symplectic potential, but instead of the coordinates v, w we use the coordinates s,t related to them by the coordinate transformation (6.6-6.7). Correspondingly there are, to begin with, two formulations of the Monge-Ampère equation, one for the Kähler potential, one for the symplectic potential. In both cases the constraints imposed by the chosen isometries reduce the effective potential to be a function of only two real variables so that the Monge-Ampère equation is a non linear partial differential equation in two variables. At this point the symplectic case still splits into two versions depending on whether we employ the pure symplectic variables or the hybrid ones s, t. In all formulations, as we show below, the equation has the property that we can fix as boundary condition an arbitrarily chosen Kähler metric on the exceptional divisor. The Monge-Ampère equation for the Kähler potential We begin with the Monge-Ampère equation written in terms of the Kähler potential. It follows from the chosen isometries that the Kähler potential K must be a function only of the two invariants: so that we can set: The use of the alternative combination T simplifies the Kähler potential in certain cases. The Monge-Ampère equation in this setup is simply the statement that the determinant of the Kähler metric is constant. Indeed in the complex coordinate setup the hermitian Ricci tensor is obtained from the logarithm of the metric determinant in the same way as the Kähler metric is obtained from the Kähler potential: where κ is a constant parameter, the Ricci tensor is necessarily zero and we have a Ricci-flat metric. The Monge-Ampère equation is obtained by replacing in eq. (9.4) the expression of det g in terms of derivatives of the Kähler potential G(T , f). Relying on the definition of the invariants provided in eq. 
(9.1) we obtain: One can solve the Monge-Ampère equation in the above form by developing the Kähler potential in power series of f: where G 0 (T ) is the Kähler potential of a convenient Kähler metric defined over the exceptional divisor. Indeed it is a property of the considered system that inserting (9.6) into the Monge-Ampère equation (9.5), the function G 0 (T ) corresponding to the Kähler potential of the Kähler metric on the exceptional divisor is undetermined, while all the other G n (T ) functions can be iteratively determined in terms of the previous G k<n (T ). As we discussed before, it is quite remarkable that on the exceptional divisor located at s = −3 the Ricci-flat orthotoric metric (5.9) reduces precisely to the Kähler metric on WP [112], which was obtained in [7] from the Kronheimer construction while performing the partial resolution of the C 3 /Z 4 singularity on a type III wall. The Monge-Ampère equation of Ricci-flatness for the symplectic potential According to [60,61] the condition for Ricci-flatness can be written as a differential condition on the symplectic potential which is the following where c h are some constants. In the case of our general metric with isometry SU(2) × U(1) × U(1), the symplectic form of the Monge-Ampère equation simplifies since we have the particular form (6.13) of the matrix G ij . Indeed we find: This facilitates the study of the Ricci-flatness condition because the coefficients c u and c v are already fixed by the need to reproduce the u-dependence of detHes. We easily find: Hence in the symplectic formalism the Monge-Ampère equation for Ricci flatness reduces to the following relation: imposed solely on the function of two variables G[v,w]. We have explicitly verified that the function G WP [112] (v, w) defined in equation (6.18), which corresponds to the orthotoric Ricci-flat metric on totK WP [112] satisfies eq. (9.12) with: Discussion of the boundary condition As we show below, differently from the case of the Monge-Ampère equation for the Kähler potential in the symplectic case, there is a subtle issue concerning the choice of boundary condition to be imposed on the function while restricting it to the exceptional divisor. The important point is that at the level of the metric the limit w → 3 2 should reproduce the metric on the divisor derived from the potential D(v) = G v, 3 2 . There are only two ways to obtain this. If the symplectic potential G(v, w) is holomorphic at w = 3 2 and admits a Taylor series expansion in w − 3 2 we are obliged to impose that ∂ w G(v, w) be a constant at w = 3 2 and this results in a recursive solution with coefficients that are rational functions of increasing order and can hardly define a convergent series. Furthermore the only known solution of the Monge Ampère equation, provided by the function (6.18) corresponding to the orthotoric metric on totK WP [112] has not this holomorphic behavior. Indeed G WP [112] (v, w) provides a paradigmatic example of the other possible boundary condition which foresees a logarithmic singularity of the symplectic potential while approaching the exceptional divisor: In the sequel we show that with the second type of boundary condition we can reconstruct the known solution G WP [112] (v, w) of equation (6.18) and also derive a series solution pertaining to the smooth F 2 case which displays the same general features as G WP [112] (v, w). Unfortunately, up to the present moment we can only give numerical evidences of the last statement. 
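The logarithmic behaviour invoked here is the familiar boundary behaviour of symplectic potentials of smooth toric Kähler metrics (the Guillemin form): near a facet {ℓ(μ) = 0} of the moment polytope, smoothness of the metric requires
\[
G(\mu)\;=\;\tfrac{1}{2}\,\ell(\mu)\,\log \ell(\mu)\;+\;\text{(terms smooth across the facet)} ,
\]
see the references quoted in the next paragraph. In the present case the facet associated with the exceptional divisor sits at w = 3/2, so, up to the affine normalisation of ℓ, the expected singular term is proportional to (w − 3/2) log(w − 3/2), i.e. ω log ω in the rescaled variable introduced below.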
In view of what we explained above we skip the details concerning the first type of boundary condition (holomorphicity at w = 3 2 and jump directly to the case of a logarithmic singularity at w = 3 2 . Indeed, a logarithmic singularity is known to be the correct behaviour to ensures smoothness of the toric Kähler metrics near to divisors [62,63,60]. 9.3 The boundary condition with a logarithmic singularity at w = 3 2 We implement the second type of boundary condition requiring that following two properties should be preserved: a) The symplectic potential G(v, w) has a finite limit for w → 3 2 b) The limit for w → 3 2 of the bundle metric should be exactly the exceptional divisor metric (8.36-8.37) Namely we must have: lim To discuss this alternative boundary condition it is convenient to use rescaled variables defined as follows In terms of such variables the Monge-Ampère equation (9.12) becomes Instead of assuming that G(x, ω) is holomorphic at ω = 0, we impose that it has a logarithmic singularity of the form ω log ω. Indeed this is the unique alternative way in which the metric on the total space can reduce to the metric exceptional divisor in the limit ω → 0. Furthermore this behavior for w → 3 2 is precisely that displayed by the symplectic potential G WP [112] (v, w) explicitly written down in eq. (6.18). Hence we assume the following different series expansion which isolates a logarithmic singularity at ω = 0: The function G 0 (x) is free. All the functions G k (x) (k ≥1) are determined in terms of G 0 (x). For instance we have: from the formal solution discussed in the previous section we obtain: (9.21) where N k+1 (x, ∆) and D k+1 (x, ∆) are polynomials whose degrees are as follows: Hence the degree of the coefficient of ω k+1 in the series expansion is a rational function of x of degree −k, a feature that looks promising for convergence. By means of a dedicated MATHEMATICA code we can calculate the polynomials N k+1 (x, ∆), D k+1 (x, ∆) to any desired order. For reason of typographical space we display here only the first terms up to order k = 2. Since so far we have not been able to guess the sum of the series in terms of elementary or higher transcendental functions, to get some understanding of the solution we have resorted to a numerical study of the approximants to the solution obtained by truncating the series in eq. (9.21) to various orders performing the plots. The relevant thing is that for the special value ∆ = 0 of the parameter we know the exact sum of the series. It is provided by the symplectic potential (6.18) which pertains to the case of totK WP [112] . This fortunate occurrence enables us to compare the plot of the exact function with those of its approximants. This comparison, as we are going to see, turns out to be quite inspiring since it elucidates the meaning of certain oscillatory behaviors of the approximants that are completely analogous in the case ∆ = 0, where we know the sum of the series and in the case ∆ > 0 where the sum is unknown. In terms of the variables x and ω the symplectic potential of the orthotoric metric takes the following explicit expression: We omit the explicit presentation of the rational functions that we have calculated by means of a computer programme up to order k = 10 and higher. We rather present the plots of such approximants. Let us first consider the plot of the function G ∆=0 (x, ω) displayed in fig.6. 
As we distinctly see from the picture, the exact function, namely the sum of the infinite series in ω, defines parametrically a perfectly smooth surface in three dimensions that however features a nontrivial structure provided by a sort of smooth bending along a line that starts approximately at x = 9/4, ω = 0 and goes up towards x = 9/2, ω = ∞. The geometric meaning of this bending is not entirely clear, yet one can guess that it corresponds to a transition region from a near-divisor geometry to an asymptotic geometry, namely that of a metric cone over the Sasakian orbifold S^5/Z_4. It is now interesting to compare the behavior of the exact function with its approximants obtained by truncating the series to various orders. Let us now consider the plots displayed in fig.7. In the plot on the right, the surface plotted in the middle is the sum of the series (i.e. the exact function), while the other two surfaces, respectively bending one up and the other down, are two consecutive approximants (the first of even order, the second of odd order). As we clearly see, the series converges to the exact function, and does so rapidly, in the region before the bending structure illustrated above. As we come close to such a line of bending the series no longer converges and its various truncations oscillate violently, creating a peculiar canyon. Let us now compare this behavior of the case ∆ = 0 with that of the series solution for ∆ = 3/4. To this effect let us consider figure 8. The structure of the plots of the truncated series is qualitatively the same in the case ∆ = 3/4 as it is in the case ∆ = 0. Furthermore, in a completely analogous way to the case ∆ = 0, for small values of ω and x the series representation of G converges rapidly to some well-defined function, while approaching the region of the bending it starts oscillating. Hence we are led to conclude that we should be able to retrieve an analytically defined solution of the Monge-Ampère equation for the symplectic potential which reduces to the Kronheimer metric on F_2 at ω = 0. It is a matter of finding some alternative way of summing the series, by a smart change of variables or by means of some smart integral transform.

Figure 8: Approximants of the function G(x, ω) for ∆ = 3/4, the sum of whose series representation is unknown. As in the other cases the plot on the right is for small values of ω and displays two consecutive approximants of order 7 and 8, respectively, while the plot on the left extends to large values of ω and displays several approximants.

The Hybrid version of the Monge-Ampère equation The most promising setup to study the MA equation for the symplectic potential is the hybrid one. Working in the s, t coordinates defined in eq.s (6.6-6.7) and setting G(v, w) = Γ(t, s), the equation (9.12) is transformed into the following one: where: It is an important observation that the term A is the square of the constraint whose vanishing implies the orthotoric separation of the s, t variables (see the last of eq.s (6.15)). It is interesting to see how, with this separation of variables, namely when A = 0, the differential equation can be solved by a separated ansatz, where Π(t) is an unknown function of t that we would like to identify with the symplectic potential of the exceptional divisor metric and Y_{1,2}(s) are also two unknown functions of s. On the other hand the other two functions entering the ansatz are integral differential functionals of Y_{1,2}(s) and Π(t), respectively: With these choices the term C in eq.
(9.26) splits into separate functions of different variables: On the other hand we find that A = 0 while the B-term factorises as follows: In this way the solution of the MA equation reduces to the solution of two separate integral differential equations one in the s variable, one in the t-variable: We focus on the first in the variable t. With rather simple manipulations it can be reduced to an ordinary differential equation of higher order, namely: which is a differential equation of the first order for the function F (t). Apart from an integration constant which is fixed by the topological constraints on the periods of Ricci form, the unique solution of eq. (9.35) is F (t, 0) corresponding to the geometry of WP [112]. This shows that in order to impose a boundary function consistent with α = 0 we need to modify the ansatz (9.27) in such a way as to introduce a certain s, t-mixing. Conclusions As we advocated in the introduction, the present paper is an illustration of the conjecture 1.1 for which we have strong support from the fact that it is verified for the value ∆ = 0 of the parameter in the paradigmatic case of the C 3 /Z 4 singularity resolution. Further numerical evidence emerges from the study of the power series solution of the Monge-Ampère equation in the symplectic potential formulation. This latter in its hybrid version seems to provide the most promising approach since different series expansions might be glued together to prolong the solution beyond the valleys of oscillations. Assuming that in due time our conjecture can be transformed into a proof, we would like to stress its relevance. According to our view point, Conjecture 1.1 provides a precise mathematical relationship to realise the gauge/gravity correspondence in a proper way. The generalized Kronheimer construction fixes all the items of the gauge theory on the brane world-volume: field content, gauge group, flavor symmetries and interactions. As maintained by Conjecture 1.1, the same Kronheimer construction determines, via the Monge-Ampère equation, also the Ricci-flat Kähler metric to be used in the construction of the dual D3-brane solution of supergravity. If 1.1 is proved we can say that, for the class of theories realised on D3 branes at C 3 /Γ Calabi-Yau singularities, the McKay quiver determines uniquely both sides of the correspondence. the set of maximal ideals of C[S σ ] with the Zariski topology. Basically following [64], we delineate a procedure to find the equations for the affine toric variety X. We remind that a Hilbert basis H σ for the semigroup S σ is a minimal set of generators for S σ which contains the rational generators of the rays of σ ∨ . Define D = #H σ . Then the elements of H σ are related by D − n relations, which generate an ideal I σ,0 of C[x 1 , . . . , x D ]. Given two ideals I, J in a ring R, the saturation of I with respect to J is defined as Then one proves that the ideal I σ of X σ in C D is the saturation of I σ,0 wih the respect to the ideal Equations for X = C 3 /Z 4 . Now we check that C 3 /Z 4 is not a schematic complete intersection, as noted in [65]. Realizing X as in equation (A.1) we can take for σ the cone with generators (1, 0, 0), (−1, 2, 0), (0, −1, 2) in the latttice N = Z 3 . The dual cone σ ∨ has rational generators (4, 2, 1), (0, 2, 1), (0, 0, 1) in M Z 3 . A Hilbert basis of S σ is obtained by adding the lattice points (1, 1, 1), (0, 1, 1), (1, 2, 1), (2, 1, 1), (2, 2, 1), (3, 2, 1). Assigning variables x 1 , . . . 
x_9 to these lattice points, we obtain that I_{σ,0} is generated by the six equations

x_1 x_8 − x_9^2 = 0, x_2 x_9^2 − x_8^3 = 0, x_3 x_9^2 − x_7^2 x_8 = 0, x_4 x_9 − x_7 x_8 = 0, x_5 x_9^2 − x_7 x_8^2 = 0, x_6 x_9 − x_8^2 = 0.

(For instance, with the variables assigned in the order listed above, the first relation x_1 x_8 − x_9^2 = 0 simply expresses the lattice identity (4, 2, 1) + (2, 2, 1) = 2 (3, 2, 1).) Saturating this ideal with respect to K = (x_1 · · · x_9), one sees that I_σ is generated by 20 quadratic equations (the equations needed to cut X out of C^9 with the correct schematic structure), of which the first ten are

x_8^2 − x_6 x_9; x_7 x_8 − x_4 x_9; x_6 x_8 − x_2 x_9; x_4 x_8 − x_4 x_5; x_1 x_8 − x_9^2; x_6 x_7 − x_5 x_9; x_5 x_7 − x_3 x_8; x_4 x_7 − x_3 x_9; x_2 x_7 − x_5 x_8; x_6^2 − x_2 x_8; . . .

These form a minimal set of generators. So X is the intersection of 20 quadrics in C^9. All these quadrics are singular along their intersection with a plane of codimension 3 (when their equation contains a square) or 4 (when their equation does not contain a square). The dimension of the singular locus is 6 and 5, respectively (not 5 and 4!). It may be interesting to see what variety the ideal I_{σ,0} describes. To this end one computes the primary decomposition of the ideal [66]. This yields 5 ideals; one is radical and coincides with I_σ, so that one component of the variety is C^3/Z_4. The other ideals are generated by monomials, and correspond to (intersections of) coordinate planes of different dimensions, counted with multiplicities.

Let us now discuss our main example, the orbifold C^3/Z_4. In the table below we summarise the action of the three non-trivial elements g ∈ Z_4, including the identifications both in the (ϕ_1, ϕ_2, ϕ_3) and in the (φ, β, ψ) coordinates.

g : (z_1, z_2, z_3) | {a_1, a_2, a_3} | (ϕ_1, ϕ_2, ϕ_3) ∼ | (φ, β, ψ) ∼
(i, i, −1)   | {1, 1, 2} | (ϕ_1 + π/2, ϕ_2 + π/2, ϕ_3 + π)     | (φ, β + π, ψ + 3π)
(−1, −1, 1)  | {2, 2, 0} | (ϕ_1 + π, ϕ_2 + π, ϕ_3)             | (φ, β + 2π, ψ)
(−i, −i, −1) | {3, 3, 2} | (ϕ_1 + 3π/4, ϕ_2 + 3π/4, ϕ_3 + π)   | (φ, β + 3π, ψ + 3π)     (B.25)

As we see, in either of these two sets of angular coordinates the identifications are not diagonal. In the coordinates (φ, β, ψ) the clearest identification is the action of the (junior) element {2, 2, 0}, which implies that the base space, with metric in the first line of (B.7), is P^2/Z_2. The action of the (junior) element {1, 1, 2} means that as β goes halfway around its circle, the coordinate ψ goes once around the ψ-circle, with period 3π. The action of the (senior) element {3, 3, 2} is simply a consequence of the previous two. In order to clarify the orbifold action on S^5, it is useful to adopt a set of angular coordinates in which the Z_4 action is diagonal. It is then simple to verify that this is achieved precisely by the original coordinates (φ, τ, χ) defined in (5.1). We summarise this diagonal action in the table below. This shows that, indeed, the Z_4 action on C^3 induces the correct Z_4 action on the asymptotic metric on S^5. In order to further clarify the orbifold action on S^5, it is convenient to rewrite the metric (B.7) in the form of a circle fibration over a base space, which turns out to be precisely WP[112]. In particular, rearranging the terms in (B.7) we arrive at the fibred form of the metric given in (B.27), which clearly displays the fact that S^5/Z_4 arises as the total space of a circle fibration over WP[112], equipped with the metric (B.28). We decorated this metric with a tilde to distinguish it from the different metric on WP[112] that we discuss in the main body of the paper, namely the metric (8.2) induced on the exceptional divisor by the Ricci-flat metric (5.9).
Below we will rewrite the latter metric in different coordinates, to facilitate the comparison with the metric in (B.28). Let us discuss briefly how to see that the (singular) variety underlying the metric defined in (B.28) is indeed WP[112]. With the ranges of coordinates and periodicities σ ∈ [0, π/2], θ ∈ [0, π], φ ∈ [0, 2π], τ ∈ [0, 2π], we see that near σ ≈ 0 the metric develops an R^4/Z_2 singularity (it is a cone over the lens space S^3/Z_2), while near σ ≈ π/2 the space shrinks smoothly to S^2 × R^2. Following a reasoning analogous to that in the main body of the paper, one can see that there exists only one non-trivial two-cycle, while the other two-cycle of F_2, which would be defined by C_1 ⇔ {t = t_max = −(3/2) sin^2 σ_max = 0}, is shrunk to zero size in the above metric. From the metric (B.27) we now read off the connection one-form

A ≡ [2 sin^2 σ / (1 + 3 cos^2 σ)] (dτ − cos θ dφ)     (B.30)

whose associated first Chern class, integrated on C_2, shows that this is indeed a connection on the unit circle bundle inside the canonical bundle of WP[112]. To summarise, in this appendix we have shown that the orbifold action of Z_4 on S^5, induced by the C^3/Z_4 quotient, is not diagonal in the canonical coordinates in which the Sasaki-Einstein metric on S^5 can be viewed as a U(1) fibration over P^2 with its Kähler-Einstein metric. This action is diagonalised precisely by the coordinates (φ, τ, (4/3)χ) used in the main part of the paper, and, adapting the metric to these coordinates, it takes the form of a U(1) fibration over WP[112] with the non-Einstein metric (B.28). This is precisely the unit circle bundle in the canonical line bundle over WP[112]. The metric on the exceptional divisor of the partial resolution, induced from the orthotoric Ricci-flat metric, is a similar, but manifestly different, non-Einstein metric (B.32). Statement about conflict of interest: On behalf of all authors, the corresponding author states that there is no conflict of interest.
27,623.4
2021-05-25T00:00:00.000
[ "Physics" ]
Extracting Code Resource from OWL by Matching Method Signatures using UML Design Document UML Extractor Software companies develop projects in various domains, but hardly archive the programs for future use. The method signatures are stored in the OWL and the source code components are stored in HDFS. The OWL minimizes the software development cost considerably. The design phase generates many artifacts. One such artifact is the UML class diagram for the project that consists of classes, methods, attributes, relations etc., as metadata. Methods needed for the project can be extracted from this OWL using UML metadata. The UML class diagram is given as input and the metadata about the method is extracted. The method signature is searched in OWL for the similar method prototypes and the appropriate code components will be extracted from the HDFS and reused in a project. By doing this process the time, manpower system resources and cost will be reduced in Software development. KeywordsComponent: Unified Modeling language, XML, XMI Metadata Interchange, Metadata, Web Ontology Language, Jena framework. INTRODUCTION The World Wide Web has changed the way people communicate with each other.The term Semantic Web comprises techniques that dramatically improve the current web and its use.Today's Web content is huge and not wellsuited for human consumption.The machine processable Web is called the Semantic Web.Semantic Web will not be a new global information highway parallel to the existing World Wide Web; instead it will gradually evolve out of the existing Web [1].Ontologies are built in order to represent generic knowledge about a target world [2].In the semantic web, ontologies can be used to encode meaning into a web page, which will enable the intelligent agents to understand the contents of the web page.Ontologies increase the efficiency and consistency of describing resources, by enabling more sophisticated functionalities in development of knowledge management and information retrieval applications.From the knowledge management perspective, the current technology suffers in searching, extracting, maintaining and viewing information.The aim of the Semantic Web is to allow much more advanced knowledge management system. To develop such a knowledge management system the software company's can make use of the already developed coding.That is to develop new software projects with reusable codes.The concept of reuse is not a new one.It is however relatively new to the software profession.Every Engineering discipline from Mechanical, Industrial, Hydraulic, Electrical, etc, understands the concept of reuse.However, Software Engineers often feel the need to be creative and like to design "one time use" components.The fact is they come with unique solution for every problem.Reuse is a process, an applied concept and a paradigm shift for most people.There are many definitions for reuse.In plain and simple words, reuse is, "The process of creating new software systems from existing software assets rather then building new ones". 
Systematic reuse of previously written code is a way to increase software development productivity as well as the quality of the software [3,4,5].Reuse of software has been cited as the most effective means for improvement of productivity in software development projects [6,7].Many artifacts can be reused including; code, documentation, standards, test cases, objects, components and design models.Few organizations argue the benefits of reuse.These benefits certainly will vary organization to organization and to a degree in economic rational.Some general reusability guidelines, which are quite often similar to general software quality guidelines, include [8] ease of understanding, functional completeness, reliability, good error and exception handling, information hiding, high cohesion and low coupling, portability and modularity.Reuse could provide improved profitability, higher productivity and quality, reduced project costs, quicker time to market and a better use of resources.The challenge is to quantify these benefits. For every new project Software teams design new components and code by employing new developers.If the company archives the completed code and components, they can be used with no further testing unlike open source code and components.This has a recursive effect on the time of development, testing, deployment and developers.So there is a base necessity to create system that will minimize these factors.http://ijacsa.thesai.org/Code re-usability is the only solution for this problem.This will reduce the development of an existing work and testing.As the developed code has undergone the rigorous software development life cycle, it will be robust and error free.There is no need to re-invent the wheel.To reuse the code, a tool can be create that can extract the metadata such as function, definition, type, arguments, brief description, author, and so on from the source code and store them in OWL.This source code can be stored in the HDFS repository.For a new project, the development can search for components in the OWL and retrieve them at ease.The OWL represents the knowledgebase of the company for the reuse code. 
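As a rough illustration of what a single method record in such an OWL knowledgebase might look like, consider the sketch below. The namespace and property names (methodName, returnType, hdfsPath, and so on) are illustrative assumptions and not taken from the paper, while the calls correspond to the standard Apache Jena model API; the validateLogin example anticipates the case study discussed later.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;

public class MethodRecordSketch {
    // Hypothetical namespace for the code-reuse knowledgebase (not from the paper).
    private static final String NS = "http://example.org/code-reuse#";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("cr", NS);

        // Describe one reusable method with its signature and storage location.
        Resource method = model.createResource(NS + "validateLogin_string");
        method.addProperty(model.createProperty(NS, "methodName"), "validateLogin");
        method.addProperty(model.createProperty(NS, "returnType"), "boolean");
        method.addProperty(model.createProperty(NS, "parameterTypes"), "string");
        method.addProperty(model.createProperty(NS, "projectName"), "SampleProject");
        // Path of the source file in HDFS, used later by the Source Retriever.
        method.addProperty(model.createProperty(NS, "hdfsPath"), "/repo/SampleProject/Login.java");

        model.write(System.out, "TURTLE");
    }
}

Writing the model out in Turtle makes the stored metadata human-readable, which is convenient when inspecting what the knowledgebase actually contains.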
The projects are stored in OWL and the source code is stored in the Hadoop Distributed File System (HDFS) [9].The client and the developer decide and approve the design document.For the paper the UML class diagram is one such design document considered as the input for the system.The method metadata is extracted from the UML and passed to the SPARQL to extract the available methods from the OWL.Selecting appropriate method from the list the code component is retrieved from the HDFS.The purpose of using an UML diagram as input is before developing software this tool can be used to estimate how many methods is to be developed by extraction.The UML diagram is a powerful tool that acts between the developer and the user.So it is like a contract where both parties agree for software development using UML diagram.After extracting the methods from the UML diagram these methods are matched in the OWL.From the retrieved methods the developer can account for how many are already available in the repository and how many to be developed.If the retrieved methods are more the development time will be shorter.To have more method matches the corporate should store more projects.The uploading of projects in the OWL and HDFS the corporate knowledge grows and the developers will use more of reuse code than developing themselves.Using the reuse code the development cost will come down, development time will become shorter, resource utilization will be less and quality will go up. The paper begins with a note on the related technology required in Section 2. The detailed features and framework for source code retriever is found in Section 3. The Keyword Extractor for UML is in section 4. The Method Retriever by Jena framework is in section 5.The Source Retriever from the HDFS is in section 6.The implementation scenario is in Section 7. Section 8 deals with the findings and future work of the paper. A. Metadata Metadata is defined as "data about data" or descriptions of stored data.Metadata definition is about defining, creating, updating, transforming, and migrating all types of metadata that are relevant and important to a user's objectives.Some metadata can be seen easily by users, such as file dates and file sizes, while other metadata can be hidden.Metadata standards include not only those for modeling and exchanging metadata, but also the vocabulary and knowledge for ontology [10].A lot of efforts have been made to standardize the metadata but all these efforts belong to some specific group or class.The Dublin Core Metadata Initiative (DCMI) [11] is perhaps the largest candidate in defining the Metadata.It is simple yet effective element set for describing a wide range of networked resources and comprises 15 elements.Dublin Core is more suitable for document-like objects.IEEE LOM [12], is a metadata standard for Learning Objects.It has approximately 100 fields to define any learning object.Medical Core Metadata (MCM) [13] is a Standard Metadata Scheme for Health Resources.MPEG-7 [14] multimedia description schemes provide metadata structures for describing and annotating multimedia content.Standard knowledge ontology is also needed to organize such types of metadata as content metadata and data usage metadata. B. 
Hadoop & HDFS The Hadoop project promotes the development of open source software and it supplies a framework for the development of highly scalable distributed computing applications [15].Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment and it also supports data intensive distributed application.Hadoop is designed to efficiently process large volumes of information [16].It connects many commodity computers so that they could work in parallel.Hadoop ties smaller and low-priced machines into a compute cluster.It is a simplified programming model which allows the user to write and test distributed systems quickly.It is an efficient, automatic distribution of data and it works across machines and in turn it utilizes the underlying parallelism of the CPU cores.The monitoring system then rereplicates the data in response to system failures which can result in partial storage.Even though the file parts are replicated and distributed across several machines, they form a single namespace, so their contents are universally accessible.Map Reduce [17] is a functional abstraction which provides an easy-to-understand model for designing scalable, distributed algorithms. C. Ontology The key component of the Semantic Web is the collections of information called ontologies.Ontology is a term borrowed from philosophy that refers to the science of describing the kinds of entities in the world and how they are related.Gruber defined ontology as a specification of a conceptualization [18].Ontology defines the basic terms and their relationships comprising the vocabulary of an application domain and the axioms for constraining the relationships among terms [19].This definition explains what an ontology looks like [20].The most typical kind of ontology for the Web has taxonomy and a set of inference rules.The taxonomy defines classes of objects and relations among them.Classes, subclasses and relations among entities are a very powerful tool for Web use. III. SOURCE CODE RETRIEVER FRAMEWORK The Source Code Retriever makes use of OWL is constructed for the project and the source code of the project is stored in the HDFS [21].All the project information of a software company is stored in the OWL.The size of the project source will be of terabytes and the corporate branches are http://ijacsa.thesai.org/spread over in various geographical locations so, it is stored in Hadoop repository to ensure distributed computing environment.Source Code Retriever is a frame work that takes UML class diagram or XMI (XML Metadata Interchange) file as an input from the user and suggests the reusable methods for the given Class Diagram.The Source Code Retriever consists of three components: Keyword Extractor for UML, Method Retriever and Source Retriever.The process of the Source Code Retriever Framework is presented in the "Fig. 
1 The XMI is an Object Management Group (OMG) standard for exchanging metadata information using XML.The initial proposal of XMI "specifies an open information interchange model that is intended to give developers working with object technology the ability to exchange programming data over the Internet in a standardized way, thus bringing consistency and compatibility to applications created in collaborative environments."The main purpose of XMI is to enable easy interchange of metadata between modeling tools and between tools and metadata repositories in distributed heterogeneous environments.XMI integrates three key industry standards: (a) XML -a W3C standard (b) UML -an OMG (c) MOF -Meta Object Facility and OMG modeling and metadata repository standard.The integration of these three standards into XMI marries the best of OMG and W3C metadata and modeling technologies allowing developers of distributed systems share object models and other Meta data over the Internet. The process flow of Keyword Extractor for UML is given in the "Fig.2".The XMI or UML file is parsed with the help of the SAX (Simple API for XML) Parser.SAX is a sequential access parser API for XML.SAX provides a mechanism for reading data from an XML document.SAX loads the XMI or UML file and get the list of tags by passing name.It gets the attribute value of the tags by attributes.getValue(<Name of the attributes>) method.The methods used to retrieve the attributes are Parse, Attributes and getValue(nameOfAttibute).The Parse() method will parse the XMI file.The Attribute is to hold the attribute value.GetValue(nameOfAttibute) method returns class information, method information and parameter information of the attribute. UML Extractor Class Name (Name, scope) Method Information ( Name, type) Parameter Information The XMI file consists of XML tags.To extract class information, method information and parameter information are identified with the appropriate tag as given in the Table I.Using the tags the metadata of the UML or the XMI is extracted.The extracted metadata are class, methods, and attributes etc., which are passed to the Method Retriever component. V. METHOD RETRIEVER Method Retriever component interact with the OWL and returns the available methods from the OWL for the given class diagram is represented diagrammatically in "Fig.3".The extracted information from the UML file by the Keyword Extractor for UML is passed to the Method Retriever component.It interacts with OWL and retrieves matched method information using SPARQL query.SPARQL is a Query language for RDF.The SPARQL Query is executed on OWL file.Jena is a Java framework for building Semantic Web applications.It provides a programmatic environment for RDF, RDFS and OWL, SPARQL and includes a rule-based inference engine.Jena is a Java framework for manipulating ontologies defined in RDFS and OWL Lite [22].Jena is a leading Semantic Web toolkit [23] for Java programmers.Jena1 and Jena2 are released in 2000 and August 2003 respectively.The main contribution of Jena1 was the rich Model API.Around this API, Jena1 provided various tools, including I/O modules for: RDF/XML [24], [25], N3 [26], and N-triple [27]; and the query language RDQL [28].In response to these issues, Jena2 has a more decoupled architecture than Jena1.Jena2 provides inference support for both the RDF semantics [29] and the OWL semantics [30]. 
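Returning to the extraction step, the following is a minimal sketch of the SAX-based Keyword Extractor described above. The tag names follow Table I, but the exact tag and attribute strings ("UML:Class", "name", "visibility") depend on the XMI export of the modelling tool and should be treated as assumptions that may need adjusting.

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class UmlKeywordExtractor extends DefaultHandler {

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attributes) {
        // Class, operation (method) and attribute declarations are read from their XMI tags.
        if (qName.equals("UML:Class")) {
            System.out.println("Class: " + attributes.getValue("name")
                    + " visibility=" + attributes.getValue("visibility"));
        } else if (qName.equals("UML:Operation")) {
            System.out.println("  Method: " + attributes.getValue("name"));
        } else if (qName.equals("UML:Attribute")) {
            System.out.println("  Attribute: " + attributes.getValue("name"));
        }
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        // The XMI file exported from the UML modelling tool (file name is illustrative).
        parser.parse(new File("ClassDiagram.xmi"), new UmlKeywordExtractor());
    }
}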
SPARQL is an RDF query language; its name is a recursive acronym that stands for SPARQL Protocol and RDF Query Language used to retrieve the information from the OWL.SPARQL can be used to express queries across diverse data sources, whether the data is stored natively as RDF or viewed as RDF via middleware.SPARQL contains capabilities for querying required and optional graph patterns along with their conjunctions and disjunctions.SPARQL also supports extensible value testing and constraining queries by source RDF graph.The results of SPARQL queries can be results sets or RDF graphs. A. Query processor A query processor executes the SPARQL Query and retrieves the matched results.The SPARQL Query Language for RDF [31] and the SPARQL Protocol for RDF [32] are increasingly used as a standardized query API for providing access to datasets on the public Web and within enterprise settings.The SPARQL query takes method parameters and the returns the results.The retrieved results contains project details like name of the project, version of the project and method details like name of the package, name of the class, method name , method return type, method parameter.Query processer takes the extracted method name and the method parameter as an input and retrieves the methods and project information from the OWL. VI. SOURCE RETRIEVER Source Retriever component retrieves the appropriate source code of the user selected method from the HDFS.It is the primary storage system used by Hadoop applications.http://ijacsa.thesai.org/HDFS creates multiple replicas of data blocks and distributes them on compute nodes throughout a cluster to enable reliable, extremely rapid computations.The source code file location of the Hadoop repository path is obtained from the OWL and retrieved from the HDFS by the copyToLocal(FromFilepath,localFilePath) method. QDox is a high speed, small footprint parser for extracting class/interface/method definitions from source files.When the java source file or folder that consists java source file loaded to QDox; it automatically performs the iteration.The loaded information is stored in the JavaBuilder object.From the java builder object the list of packages as an array of string are returned.This package list has to be looped to get the class information.From the class information the method information is extracted.It returns the array of JavaMethod.From this java method the information like scope of the method, name of method, return type of the method and parameter informations are extracted from the JavaMethod. QDox finds the methods from the source code.The file that is retrieved from the HDFS is stored in the local temporary file.This file is passed to the Qdox addSource() method for parsing.Through Qdox each method is retrieved one by one.The retrieved methods are compared with methods that the user requested for source code retrieval method.If it matches the source code is retrieved by getSourceCode() method.Then the temporary file is deleted after the process.In Hadoop repository files are organized in the same hierarchy of java folder.So it gets the source location from the OWL and retrieve the java source file to a temp file.The temporary file is loaded into QDox to identify methods.Each method is compared with method to be searched.If it matches; the source code of the method is retrieved by getMethodSourceCode() method. VII. 
CASE STUDY The input for the frame work is a UML class diagram.The sample class diagram is given below The entire process of the framework is given in the Table II.The Keyword Extractor for UML uses the class diagram and retrieves the method validateLogin(username:string).The output is given to the Method Extractor and generates the SPAQL query and extracts the matched methods which are listed in the Table III.From the list the appropriate method will be selected and the QDox retrieves the source code from the HDFS and displays the method definition of the selected methods as shown in the output of the Source Retriever in Table II.To test the performance of this framework the reusable OWL files are created by uploading the completed projects.The first OWL file is uploaded with first java project.The second OWL file is uploaded with first and the second java projects.The third OWL file is uploaded with first, second and third java projects.Similarly five OWL files are constructed.The purpose of creating OWL is to show how reusability increases when the knowledgebase grows.A sample new project is considered and it contains ten methods to be developed.The OWL files are listed with the number of packages, number of classes, number of methods and number of parameters.These methods are matches with the OWL files and the number of matches is listed in the Table IV.These data in the row of the Table IV shows that the number of matched methods.The reusability graph shown in the "Fig.4" shows that how the matches increases when the number of projects in the OWL grows.For the graph only five new method names are used instead of ten listed in the Table IV.The X-axis represents the OWL file numbers and the Yaxis represents the number of method matched for the new method legends.This progress shows that by uploading more projects in the knowledgebase can able to provide nearly hundred percent of the methods for reuse during software development. VIII. CONCLUSION The paper presents a framework to extract the method code components from the OWL using the UML design document.OWL is semantically much more expressive than needed for the results of our searching.With these sample tests the paper argues that it is indeed possible to extract code from OWL using the UML class diagram.The purpose of the paper is to achieve the code reusability for the software development.The OWL for the source code has already been created and this paper searches and extracts the code and components and reuses to shorten the software development life cycle.Before starting the coding phase of the development the framework helps the software development team to access the possibilities of how much code can be reused and how much code need to be developed.This assessment can help project manager to allot resources to the project and reduce cost, time and resource.The software companies can make use of this framework and develop the project quickly and grab the project at the lower cost among the competitors. 
After developing the OWL ontology and storing the source code in the HDFS, the code components can be reused. This paper has taken a design document from the user as input, extracted the method signatures, and searched for matches in the OWL. As the knowledgebase is uploaded with more and more projects, the reuse rate also becomes higher. Future work can take the SRS as input; text mining can be performed to extract keywords as classes and processes as methods. The SRS artifact belongs to a much earlier phase than the UML, so a considerable amount of time can be saved compared with using UML as input. The method prototype can be used to search and match against the OWL, and the required method definition can be retrieved from the HDFS. The purpose of storing the metadata in OWL is to minimize factors such as development time, testing time, deployment time and developer effort. Creating an OWL knowledgebase using this framework can reduce these factors.

Figure 1. Process of Source Retriever. The Method Retriever retrieves the matched methods from the repository by constructing a SPARQL query. The user selects the appropriate method from the list of methods and retrieves the source code through the Source Retriever component, which interacts with HDFS and displays the source code.

IV. KEYWORD EXTRACTOR FOR UML
Unified Modeling Language (UML) is a visual language for specifying, constructing, and documenting the artifacts of systems. It is a standardized general-purpose modeling language in the field of software engineering. To create the UML class diagram the Umbrello UML Modeller open source tool is used, and the diagram is stored in XMI format. Umbrello UML Modeller is a Unified Modeling Language diagram program for KDE. UML allows the user to create diagrams of software and other systems in a standard format, and Umbrello can support the software development process, especially during the analysis and design phases. UML is the diagramming language used to describe such models, and software ideas can be represented in UML using different types of diagrams. Umbrello UML Modeller 1.2 supports Class Diagrams, Sequence Diagrams, Collaboration Diagrams and Use Case Diagrams, among others.

Figure 2. Process of Keyword Extractor for UML.
Figure 3. Method Retriever process.

TABLE I. TAGS USED TO EXTRACT METADATA FROM XMI FILE
UML:Class — holds the information of the class, such as the name of the class, visibility of the class, etc.
UML:Attribute — a sub-tag of class; holds the information of the class attributes, such as the name, type and visibility of each attribute.
UML:Operation — holds the method information of the class, such as the name of the method, return type of the method, and visibility of the method.

TABLE II. PROCESS FLOW OF THE FRAMEWORK
TABLE III. METHOD RETRIEVER OUTPUT
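To make the retrieval flow of Sections V and VI concrete, the following end-to-end sketch chains a SPARQL lookup in the OWL knowledgebase, a copy from HDFS, and a QDox scan of the retrieved file. The ontology property names, SPARQL pattern and file paths are illustrative assumptions; the Jena and Hadoop calls are standard API usage, and the QDox calls follow its documented API, though exact method names can differ between QDox versions.

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import com.thoughtworks.qdox.JavaProjectBuilder;
import com.thoughtworks.qdox.model.JavaClass;
import com.thoughtworks.qdox.model.JavaMethod;

public class SourceCodeRetrieverSketch {
    public static void main(String[] args) throws Exception {
        // 1. Method Retriever: look up matching method signatures in the OWL knowledgebase.
        Model model = ModelFactory.createDefaultModel();
        model.read("corporate-knowledgebase.owl");   // illustrative file name
        String sparql =
            "PREFIX cr: <http://example.org/code-reuse#> " +      // hypothetical namespace
            "SELECT ?method ?hdfsPath WHERE { " +
            "  ?method cr:methodName \"validateLogin\" ; " +
            "          cr:hdfsPath   ?hdfsPath . }";
        String hdfsPath = null;
        try (QueryExecution qexec = QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
            ResultSet results = qexec.execSelect();
            if (results.hasNext()) {
                hdfsPath = results.next().getLiteral("hdfsPath").getString();
            }
        }
        if (hdfsPath == null) return;   // no reusable method found

        // 2. Source Retriever: copy the matched source file from HDFS to a local temporary file.
        FileSystem fs = FileSystem.get(new Configuration());
        File local = new File("retrieved_Login.java");
        fs.copyToLocalFile(new Path(hdfsPath), new Path(local.getAbsolutePath()));

        // 3. Use QDox to isolate the body of the requested method from the retrieved file.
        JavaProjectBuilder builder = new JavaProjectBuilder();
        builder.addSource(local);
        for (JavaClass cls : builder.getClasses()) {
            for (JavaMethod m : cls.getMethods()) {
                if (m.getName().equals("validateLogin")) {
                    System.out.println(m.getDeclarationSignature(true));
                    System.out.println(m.getSourceCode());
                }
            }
        }
    }
}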
5,170.2
2011-01-01T00:00:00.000
[ "Computer Science" ]
Current Status of Outdoor Lifetime Testing of Organic Photovoltaics Abstract Performance degradation is one of the key obstacles limiting the commercial application of organic photovoltaic (OPV) devices. The assessment of OPV stability and lifetime are usually based on simulated degradation experiments conducted under indoor conditions, whereas photovoltaic devices experience different environmental conditions under outdoor operation. Besides the intrinsic degradation of OPV devices due to the evolution of optoelectronic and morphological structure during long‐term operation, outdoor environmental changes can impose extra stresses and accelerate the degradation of OPV modules. Although outdoor studies on long‐term OPV stability are restricted by the long data collection times, they provide direct information on OPV stability under mixed degradation stresses and are therefore invaluable from the point of view of both research and practical application. Here, an overview of the current status of outdoor lifetime studies of OPVs is provided. After a summary of device lifetime extrapolated from indoor studies, outdoor lifetime testing platforms are introduced and the operational lifetime of various OPV devices are reviewed. The influence of climate and weather parameters on device performance and burn‐in phenomena observed during the degradation of OPVs is then discussed. Finally, an outlook and directions for future research in this field are suggested. Introduction Organic photovoltaic (OPV) devices are a candidate for next generation photovoltaic (PV) applications because they can be solution-processed on light-weight, flexible substrates over large areas: [1] a property that could greatly decrease manufacturing cost and permit new applications such as wearable devices. OPVs also have the potential for shorter energy payback times compared to many other PV technologies as a result of lower embodied energy in the solution-based deposition techniques that are expected as part of their manufacture. [2] The past decade has witnessed a rapid improvement in OPV efficiency. Through the combined effort of chemical design and synthesis, new polymer donors and nonfullerene organic semiconductor acceptor materials have emerged and enabled numerous photovoltaic blend systems to achieve power conversion efficiencies (PCE) in excess of 10%; [3] a level considered as a milestone for commercialization. However, high efficiency is not the only requirement for commercialization; rather extended operational stability also must be demonstrated. For silicon based PVs (the technology that presently dominates the PV market), operational stabilities of 20 years can be achieved. [4] For OPVs, it has been estimated that a lifetime of at least 10 years must be demonstrated to render such devices financially competitive; a level of stability that currently remains challenging. The degradation of OPV device performance has been widely observed, however the volume of research undertaken to study this process is substantially less than that devoted to the development of new materials or processing studies undertaken to engineer an enhancement in PCE. [5] Known degradation mechanisms include photo-and water-induced chemical reactions within the active layer, the degradation of device electrodes, the instability of hole and electron transport layers and a failure of device encapsulation. A detailed discussion of device degradation mechanisms can be found in a number of comprehensive reviews. 
[5b,6] Compared to outdoor studies, lifetime studies conducted under indoor conditions combine the advantages of reduced data collection time together with well-controlled and well-defined environmental conditions. However, the degradation pathways that exist during indoor studies are usually fixed.

OPV Lifetime Extrapolated from Indoor Lifetime Studies

Outdoor real-world lifetime studies of OPVs are time consuming and require a comprehensive testing platform. Because of this, the lifetime of OPV devices is usually extrapolated from indoor degradation tests that are run under accelerated conditions. [7] Before 2011, there were no specific standards for OPV lifetime testing, and thus the results reported before then cannot be fully compared due to differences in data collection, analysis and presentation methods. At that time, the standards used in some OPV lifetime research were based on protocols developed by the International Electrotechnical Commission (IEC) for the characterization of amorphous silicon PVs. Here the most commonly used standard is known as IEC61646, which comprises a series of degradation tests, including a 1000 h damp heat (DH) test at 85 °C and 85% humidity, 200 cycles of thermal cycling (TC) from −40 to +85 °C, and a sequence test consisting of UV exposure, 50 cycles of TC, and 10 cycles of humidity freeze (HF) from −40 to +85 °C at 85% humidity. After finishing each test, modules are then characterized to determine device efficiency. The feasibility of applying the IEC61646 standard to OPV lifetime testing has been explored. For example, Yan et al. [8] characterized the stability of semitransparent OPV modules based on P3HT:PCBM following the IEC61646 standard. They found that modules with an initial efficiency of around 3% underwent an efficiency loss of 8% for modules encapsulated using a flexible barrier and 4% for laminated glass encapsulation by the end of the test period.
However, as the IEC61646 standard was established for amorphous silicon thin-film solar cells, there are concerns regarding its application to OPVs, as the degradation mechanisms active in silicon-based photovoltaics and OPVs are unlikely to be the same. For this reason an International Summit on Organic solar cell Stability (ISOS) was held in 2011 and discussed issues relating to the reliability and repeatability of OPV lifetime studies. Following this, recommendations for OPV stability tests were established based on the consensus of a large number of research groups; these now provide standards for the study of OPV stability, allowing a direct and more reliable comparison to be made between different research studies. [9] OPV lifetime testing conducted under laboratory conditions can be divided into several categories, with devices being subjected to dark storage, laboratory weathering, thermal cycling and solar-thermal-humidity cycling. For each test, three levels are defined according to the requirements for measurement facilities and accuracy, as shown in Table 1. Dark storage and laboratory weathering tests are two widely used long-term lifetime tests that are conducted indoors. Thermal cycling and solar-thermal-humidity cycling are rarely applied due to the relative complexity of the tests as well as the short lifetime of most OPVs under such harsh conditions. In dark storage tests, OPV devices are simply stored in the dark, with exposure to atmospheric oxygen and moisture being the main degradation processes. According to ISOS test protocols, devices can be exposed to ambient or elevated temperatures and humidity, with tests corresponding to the ISOS-D-1, ISOS-D-2, and ISOS-D-3 tests as described in Table 1. Angmo and Krebs [10] fabricated large-area, ITO-free P3HT:PCBM OPV devices using roll-to-roll techniques and investigated long-term dark-storage lifetime following the ISOS-D-2 standard. It was found that OPV modules retained more than 80% of their initial efficiency after more than 2 years of dark storage, with the efficiency loss being mostly attributed to degradation at the electrode contacts. Although the initial efficiency of the above P3HT:PCBM modules was relatively low (PCE of 1.06%), such results are very encouraging considering that the modules were fabricated using scalable techniques, and they indicate a promising stability of the organic photoactive layer against atmospheric oxygen at elevated temperatures and low humidity. Fullerene and nonfullerene based OPVs with higher initial efficiencies have also been tested employing ISOS-D standards (see Table 2). In recent years, following the rapid development of perovskite solar cells (PSCs), ISOS-D standards have also been applied to investigate the stability of such devices. [11] Generally, dark storage lifetime studies are employed to determine the stability of OPV devices when exposed to air with or without extra thermal or moisture stresses.
Since photo-induced chemical reactions do not occur during dark storage, degradation under this type of test is usually attributed to the ingress of oxygen and water into the device; a process that often results in the failure of the device contacts or degradation of the photoactive layer. Such degradation mechanisms also occur under outdoor conditions and thus indoor testing provides important information regarding device stability, despite its inability to provide a precise measure of OPV stability under real-world conditions. Another commonly used laboratory method to predict OPV lifetime is exposing devices to a constant irradiance, known as laboratory-weathering tests. It is generally found that device lifetimes measured under dark storage are much longer than those measured when devices are irradiated. For some photosensitive organic semiconductors, e.g., PBDTTT-EFT, [12] device degradation under illumination is particularly rapid.

(Table 1. Summary of lifetime testing types and conditions. Adapted with permission. [9] Copyright 2011, Elsevier.)

A typical schematic of OPV efficiency as a function of time is shown in Figure 1. Here, it can be seen that the device efficiency initially degrades rapidly under illumination. [6b] At a later point, this degradation rate slows and becomes approximately linear. This initial, rapid degradation period is termed "burn-in". [13] The lifetime of OPV devices is characterized by the lifetime parameter Ts80, which is extracted as the time point at which the efficiency drops to 80% of its value at the end point of the burn-in period. The end of the burn-in process is defined as the end of the initial fast exponential decay, or the start point of linear degradation. Admittedly, in some cases the accurate determination of this point is not straightforward; however, in long-term lifetime studies the inaccuracy introduced by this uncertainty is relatively small. Sometimes the lifetime parameter T80 is quoted, which is defined as the time over which the efficiency decays to 80% of its initial value. Clearly, Ts80 is longer than T80, as sometimes more than 20% of the initial efficiency is lost during the burn-in period. In many cases the T80 lifetime can be relatively short, however this does not necessarily result in a short Ts80. The lifetime of an OPV module can be estimated by calculating the energy dose received by a module under indoor conditions. This is then converted to an equivalent energy dose that would be received from the sun under outdoor conditions. Peters et al. [14] compared the stability of P3HT and PCDTBT based OPV devices held at their maximum power point, exposed to a constant irradiance of 100 mW cm−2 (±4%) and a temperature of 37 °C (held using a water-heated copper plate) over a period of 4400 h. For both types of device, a clear burn-in period was observed lasting around 1300 h. Using a linear fit, a Ts80 lifetime of more than 12 000 h was extrapolated for PCDTBT based devices. It was also found that a clear determination of the end of the burn-in period was critical in extrapolating the Ts80 lifetime. In theory, the end point of the burn-in process should correspond to the turning point of the slope in the degradation curve, after which efficiency degrades in a linear manner. However, identifying this point is subjective and a consensus should be established and applied to precisely define this end point. Indeed, by changing the end of the burn-in process, the extrapolated Ts80 lifetime of P3HT based devices was varied from 5000 to 7000 h.
Under the assumption that a PV device positioned outdoors would be exposed to an average irradiance level of one sun for 5.5 h day −1 , a lifetime of 6.2 years and between 2.5 and 3.8 years was predicted for PCDTBT and P3HT based OPV devices respectively. Despite the relatively large errors that are associated with such extrapolations, a predicted lifetime of 6.2 years is an encouraging level of OPV stability. Furthermore, by minimizing oxygen and water exposure during the test conditions, Mateker et al. [15] observed that OPVs could operate with minimal intrinsic degradation for thousands of hours, with extrapolated lifetimes extending beyond 15 years. The lifetime of several OPVs tested under the ISOS-L standards is presented in Table 2. The references in Table 2 also show that optical-radiation energy dose received by the OPV device is an important parameter in determining device lifetime. In some reports, device stability has not been estimated based on a single test, rather researchers have used a series of protocols to investigate the degradation of OPV devices. This raised the question of how to compare the lifetime data acquired under different protocols. Gevorgyan et al. [28] established an "o-diagram" method to present stability data in order to compare the lifetime determined under different testing methods and performed in different laboratories. This is shown in Figure 2, where the Y-axis of the o-diagram represents the initial efficiency of an OPV module (either initial efficiency or efficiency just after the burn-in process) and the X-axis represents device lifetime plotted on a logarithmic scale. A second time-scale presented at the top of the diagram divides time into hours, days, weeks etc. This presentation method is an effective way to compare device lifetimes obtained under different test protocols. Recently, Kettle et al. [29] established a lifetime testing model to obtain an acceleration factor for each of the ISOS standards that is defined as the ratio between device lifetime measured under accelerated and real world conditions. For acceleration factors less than 1.0, indoor-tested devices degrade more slowly than those positioned outdoors. For factors greater than 1.0, indoor device degradation is accelerated compared to that determined under outdoor tests. In this study, it was concluded that the ISOS-D-1 testing condition resulted in an acceleration factor of 0.45. However with an increased temperature (ISOS-D-2) or an increased temperature and humidity (ISOS-D-3), the acceleration factor increased to 2.00 and 12.11 respectively. This suggested that elevated temperature and humidity significantly accelerates device degradation. Degradation under illumination was found to be generally faster than that determined under dark storage. Tests under the condition of ISOS-L-2 revealed an acceleration factor of 15.70. With the humidity elevated to 50%, the ISOS-L-3 condition resulted in an even larger acceleration factor of 24.70. Note such measurements were based on the outdoor conditions prevalent in Bangor, North Wales. The time required for different indoor lifetime testing protocols to simulate a one-year outdoor degradation process is presented in Table 3. This work allowed lifetime data collected indoors under different ISOS standards to be related to expected lifetime under outdoor conditions. However, this model is clearly dependent on local climate conditions in North Wales and cannot provide a universal model to transfer indoor lifetime data to outdoor results. 
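The dose-equivalence conversion behind these figures can be made explicit. As a worked example, using the assumption quoted above of an average outdoor dose equivalent to 5.5 h of one-sun illumination per day:

% Converting an indoor Ts80 (hours under one sun) into an estimated outdoor lifetime,
% assuming an average outdoor dose equivalent to 5.5 h of one-sun illumination per day.
\[
  t_{\mathrm{outdoor}} \;\approx\; \frac{T_{s80}}{5.5\ \mathrm{h\,day^{-1}} \times 365\ \mathrm{day\,yr^{-1}}}
  \;=\; \frac{12\,000\ \mathrm{h}}{2\,007.5\ \mathrm{h\,yr^{-1}}} \;\approx\; 6\ \mathrm{yr},
\]

which is consistent with the 6.2 years quoted above for PCDTBT devices; the small difference presumably reflects the fact that the extrapolated Ts80 was somewhat above 12 000 h.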
Indeed, due to the large variations in real-world conditions, the establishment of a general model is not trivial. However, one possible solution is to determine a coefficient for each parameter; this will clearly require international coordination and collaboration together with considerable financial investment. Outdoor Lifetime Tests Considering the difficulties in simulating outdoor real-world conditions for OPV lifetime tests, a number of researchers have explored moving such tests directly to outdoors. Indeed, outdoor lifetime tests are also included in the ISOS standard, as shown in Table 4. Test Platforms Used in OPV Outdoor Lifetime Study To study OPV degradation outdoors, it is necessary to build a reliable testing platform. Such studies have been pioneered by F.C. Krebs and his colleagues, who have made strong progress in this area. In 2006, [30] they reported the operational stability of OPVs based on three photovoltaic blends composed of the materials MEH-PPV:PCBM, P3HT:PCBM, and P3CT:C60 in Israel (30.9°N). The equipment used was relatively simple, with a thermopile pyranometer and a thermocouple mounted with the OPVs under test in a solar tracker (see Figure 3). The measurements were carried out in the daytime (from 9 a.m. to 5 p.m.), with devices stored in a nitrogen-filled glovebox between tests. This periodic interruption meant the study was not comparable with subsequent outdoor lifetime studies, however the test protocol fulfilled other requirements of the ISOS-O-1 standard. Although the test only lasted for a month, it is still of great importance as it represents the first attempt to test OPV lifetime under real-world conditions. Adv. Sci. 2018, 5, 1800434 Figure 2. An "o-diagram" displaying device lifetime obtained from different testing protocols. Reproduced with permission. [28] Copyright 2014, Elsevier. In 2008, researchers from Konarka Inc. [31] established a more advanced outdoor lifetime testing platform in Lowell, USA (42.6°N) which was used to investigated the lifetime of flexible P3HT:PCBM OPV modules under outdoor conditions. The testing platform was located on a rooftop without any shade and faced south to maximize the solar irradiance. During the test, the OPV modules were kept under load conditions, and were connected to a resistor to make sure they operated at the initial maximum power point. The device outdoor lifetime performance was found to be promising, with no serious loss in performance determined after over 1 year's outdoor exposure. However, the maximum power point was found to shift and thereby induce a nonoptimal loading of the OPVs during testing. One important question raised by this study is the nature of the optimum load condition required for long-term testing, and whether it is better to keep device under open circuit between the J-V measurements. Here, the setup fulfilled all requirements of ISOS-O-2 although it was reported prior to the establishment of the ISOS standards. After the establishment of the ISOS standards, Krebs and co-workers built a test platform located in Roskilde, Denmark (55.6°N). As shown in Figure 4, the OPV modules tested were mounted on a solar tracker and connected with an automated system used to record a J-V curve every 10 min (and held at open circuit between measurements). Along with the device metrics, the system recorded environmental parameters including temperature and irradiance level. 
The OPV modules were intermittently dismounted from the platform and tested under a solar simulator to fulfill the requirement of ISOS-O-3. Their collaborators also built outdoor lifetime testing platforms in India, the Netherlands, Germany, Australia and Israel, which were simplified versions of the system in Denmark while still fulfilling ISOS-O-2 standards. Another outdoor lifetime testing platform was built in Sheffield, England (53.4°N). [32] This system used a rigid sample chamber that provided an extra level of protection to the OPV modules (see Figure 5). During operation, each sample chamber was filled with nitrogen at a slight overpressure to maintain devices in an inert atmosphere; a feature that made it possible to test OPV modules having relatively basic levels of encapsulation. The J-V curves were recorded at an interval of ≈5 min, with temperature and irradiance measured simultaneously. The sample chambers were held at an angle of 30° to the horizon and pointed south to maximize the solar flux incident upon the OPVs. Because of the use of the sample chamber however, this platform does not fulfill the requirement of ISOS-O as the devices are no longer directly exposed to air or moisture. This is because the chamber does not form part of the device and cannot be considered as extra encapsulation. Nevertheless, it does allow long-term comparison to be made between different organic-semiconductor devices that have imperfect encapsulation. As the climate and geographical conditions significantly influence the performance and degradation of OPVs, it is useful to compare degradation of OPV modules located in different regions to explore the effect of climate on their long-term outdoor stability. Krebs et al. [33] conducted interlaboratory experiments by comparing outdoor lifetime data, however the systems used by different groups were not identical. Although the experiments were all designed to follow the ISOS-O standard, small errors caused by the different setups cannot be ignored. To make outdoor lifetime studies easier and to increase the comparability of outdoor lifetime tests conducted by different groups, a standard testing platform is required. Krebs and coworkers [34] later designed a packaged outdoor OPV test suitcase, which served as both sample transportation and as a sample holder for outdoor testing. As shown in Figure 6, the samples were mounted onto the outer surface of the suitcase, with the mini-platform being fixed at a certain angle to optimize the absorption of the incident sunlight. The suitcase also provided the necessary electronics to determine open circuit voltage (V oc ) and short circuit current (J sc ). The development of this suitcase enabled comparable outdoor test experiments to be performed by most research laboratories and increased participation in the "OPV outdoor testing consortium". Reproduced with permission. [10] Copyright 2015, Wiley-VCH. In summary, a number of successful long-term outdoor lifetime testing platforms have been developed, however, a universal, cost-efficient setup is still needed. The general requirements for such a platform include methods to automatically and continuously record J-V sweeps, temperature and irradiance level. Such systems are also portable and sufficiently inexpensive to be accessible to research groups having a limited budget. The establishment of such a test platform would require concerted action from the whole OPV research community. 
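As an illustration of the routine processing such a platform must perform on every recorded sweep, the following self-contained sketch finds the maximum power point of one J-V curve, references the efficiency to the irradiance measured at the time of the sweep, and normalises Jsc to one sun for day-to-day comparison. The numerical values are invented for the example; the normalisation itself follows the ISOS-O recommendations discussed later.

public class JVSweepProcessor {
    public static void main(String[] args) {
        // One recorded J-V sweep: voltage in V, current density in mA cm^-2 (illustrative values).
        double[] v = {0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.55, 0.60};
        double[] j = {9.80, 9.70, 9.50, 9.10, 8.20, 5.50, 3.00, 0.00};
        double irradiance = 87.0;          // measured irradiance in mW cm^-2 at the time of the sweep
        double standardIrradiance = 100.0; // one sun, mW cm^-2

        // Maximum power point from the sampled curve.
        double pMax = 0.0;
        for (int i = 0; i < v.length; i++) {
            pMax = Math.max(pMax, v[i] * j[i]);   // mW cm^-2
        }

        // Efficiency referenced to the irradiance actually measured outdoors,
        // and Jsc scaled linearly to one sun so that sweeps from different days can be compared.
        double pce = 100.0 * pMax / irradiance;
        double jscNormalised = j[0] * standardIrradiance / irradiance;

        System.out.printf("Pmax = %.2f mW/cm^2, PCE = %.2f %%, Jsc(1 sun) = %.2f mA/cm^2%n",
                pMax, pce, jscNormalised);
    }
}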
Status of Long-Term Outdoor Lifetime Testing According to the database of ISI web of knowledge, a search including the key words "organic/polymer solar cells/ photovoltaics," returns more than 14 000 hits. However, when the key word "outdoor" is added to the search, only around 150 hits are found. Furthermore, the majority of OPV outdoor lifetime studies only last for a few hundred hours, with long-term outdoor lifetime tracking studies being relatively rare. www.advancedscience.com Although short-term outdoor lifetime testing cannot be used to extrapolate the long-term lifetime of OPV modules, it is an effective tool to compare the influence of different designs on OPV stability. For instance, Teran-Escobar et al. [35] tested P3HT:PCBM based solar cells under outdoor conditions for a period of 1000 h in Barcelona, Spain. It was found the devices using a V 2 O 5 ·0.5H 2 O HTL had good stability in outdoor conditions, with the use of a UV filter being beneficial in improving device stability (UV irradiance can induce photoreactions and thereby reduce device performance). A similar study was conducted by the same group, [36] where an outdoor lifetime study was conducted for 900 h following the ISOS-O-2 standard. Here, it was found that the use of an aqueous solution-processed V 2 O 5 hole transport layer could improve P3HT:PCBM based OPV module lifetime, with devices still retaining more than 80% of their initial efficiency after 900 h of continuous testing. Josey et al. [37] tested the outdoor stability of some fullerene-free OPV devices over around 40 days and concluded that the chemical structure of the acceptor molecule had significant impact on device stability. Due to the restriction of the testing platform used, the samples were only exposed to outdoor conditions for 6 h day −1 and were returned to indoor conditions for dark storage at night, and thus this study cannot be directly compared with other work. Most outdoor studies have been conducted by Krebs and coworkers, with a particular focus on P3HT:PCBM based solar modules fabricated by roll-to-roll processing methods. Their outdoor lifetime studies have been performed in different counties including Denmark, India, Holland, Germany, Israel, and Australia. The details of their results are presented in Table 5. Other groups have also reported long-term outdoor lifetime studies of OPV devices in different locations. For example, Emmott et al. [38] studied the off-grid stability of OPV modules in outdoor conditions in Rwanda, Africa. The outdoor stability in Africa-where the UV levels and ambient temperature are much higher than Europewas determined to be between 2.5 and 5 months; a value smaller than that of the same module tested in Europe. The failure of the encapsulation was identified as the main cause of the degradation. Krebs and co-workers have also explored OPV module lifetime in a greenhouse [39] and found that module lifetime was enhanced slightly; a result that suggests possible new applications for OPVs. The lifetime of OPV devices is significantly affected by the quality of the encapsulation; [40] this is especially true in outdoor applications as devices are exposed to a range of stresses including irradiance, thermal cycles, wind, rain, snow, and high moisture-levels. [10] It has been shown that unencapsulated devices have operational lifetimes that are several magnitudes lower than encapsulated ones. 
Although the importance of encapsulation has been well established, the packaging of OPV modules accounts for around 60% of their total cost. [41] The development of secure, inexpensive and effective encapsulation packages remains a real challenge. Weerasinghe et al. [42] developed an encapsulation strategy based on commercially available barrier films and adhesives and used this to package fully printed OPV modules that showed limited efficiency loss after 13 months of outdoor operational testing. The modules experienced harsh weather conditions during outdoor testing, including ambient temperatures ranging from −1 to 45 °C, heavy rain and hailstorms. Control, nonencapsulated modules were found to be completely nonfunctional within 48 h of outdoor exposure, even without being exposed to any "extreme" weather. The study clearly shows that the intrinsic stability of all-printed OPV modules is highly promising and provides significant motivation to develop more effective and cheaper encapsulation techniques that can be used to protect large-area and flexible OPV modules. As can be seen from Table 5, OPVs tested outdoors have demonstrated lifetimes exceeding 2 years provided they are effectively encapsulated. However, outdoor lifetime tests conducted over longer time periods are still required. Most reported long-term OPV outdoor lifetime tests are based on devices containing an active layer composed of P3HT:PCBM, a material system that is known to have high intrinsic stability. Progress has been made in the development of flexible OPV modules having promising stability when tested under outdoor conditions. [10] Here, the concept of the water vapor transmission rate (WVTR) is of key importance. This parameter is used to characterize the amount of water vapor that passes through a layered material over a set time period and has units of g m−2 day−1. [47] We note that it has proved challenging to develop long-lived flexible organic LEDs for display applications, [47] which require extremely low WVTR values; a less demanding WVTR is required for OPV applications as compared to OLEDs (see discussion in Section 4.3). It has been argued that low OPV module efficiency is not an obstacle for commercialization provided that devices cover a sufficiently large area and that the manufacturing cost is sufficiently low. [48] However, high power conversion efficiency is always desirable as this will reduce the energy payback time. OPV modules have been fabricated using D-A polymer:fullerene systems having much higher PCE. [49] Indeed, the authors of this review have used two such materials and have performed outdoor lifetime studies, with device lifetimes demonstrated between 6200 and 10 000 h. [47,61] More efficient donor materials and nonfullerene acceptor materials have advanced the PCE of OPV devices to more than 10%; however, most of the stability research on these materials is still limited to laboratory conditions. [50] More work is needed to move the stability testing to outdoor conditions. The adoption of ISOS-O standards clearly improves the comparability of tests conducted by different research groups. Although these ISOS-O standards are detailed, Gevorgyan et al. [33b] made a series of further suggestions and supplements to such measurements that we summarize here:
(1) To ensure the reproducibility and reliability of the lifetime data, at least 5 identical devices should be measured under the same conditions.
(2) The environmental conditions including temperature, humidity, and irradiance level should be monitored and recorded along with the OPV device metrics.
(3) The cumulative energy dose received by the samples should be calculated over the whole test period.
(4) Samples should be periodically taken back to laboratories and tested under well-defined indoor conditions (at least once a month is recommended). This is especially necessary in winter or in rainy seasons when irradiation is limited. However, mechanical and electrical stresses during such indoor tests should be carefully controlled and minimized.
(5) As the irradiance level has a great influence on device efficiency, the collected data should be screened according to a specific irradiance-level range. The Jsc should be normalized to the irradiance level to make a fair comparison.
(6) If possible, the temperature coefficient of the device efficiency should be established and the PCE corrected according to this coefficient.
(7) A direct link between the ISOS-L and ISOS-O lifetime tests should be established via the cumulative energy dose received by the devices, [46] allowing a comparison to be made between indoor and outdoor lifetime data.
Another effective way to compare indoor and outdoor lifetime data is through the "o-diagram" as described by Gevorgyan et al. [43]
Outdoor Factors Influencing OPV Device Stability
The environment is a dynamic system, with temperature, humidity and irradiance levels all changing simultaneously over time and over seasons. In the following sections, we discuss how these factors influence OPV lifetime.
Temperature
The efficiency of OPVs is strongly dependent on temperature, as charge transport in organic semiconductors occurs through a thermally assisted hopping process [51] and thus the short circuit current (Jsc) usually increases with elevated temperature. The open circuit voltage (Voc) decreases slightly with increased temperature, [52] which can be expressed using the following equation:

qVoc = Eg − Δ − kB T ln(Nc² / (ne nh))    (1)

Here, Eg is the effective energy gap between the donor HOMO and acceptor LUMO, q is the elementary charge, kB is the Boltzmann constant, T is the temperature, Δ is related to disorder resulting from the solution-processed and phase-separated polymer and fullerene regions, ne and nh are the electron and hole densities in the acceptor and donor domains at open circuit, and Nc is the density of conduction states (DOS) at the band edge of the acceptor and donor. The overall device efficiency most often increases due to the stronger positive correlation of Jsc with temperature. It has been shown that the efficiency of ITO/PEDOT:PSS/OC1C10-PPV:PCBM/Al OPV devices increases from below 0.8% at 250 K to 1.9% at 320 K, as shown in Figure 7. [53] The same phenomenon has been reported in OPVs employing MDMO-PPV:PCBM as the photoactive layer. [54] However, recent studies based on tracking the diurnal performance of small-molecule planar-mixed heterojunction DBP:C70 OPV devices in outdoor conditions suggested that the positive temperature coefficient resulted from spectral broadening of the absorption caused by enhanced electron-phonon coupling at elevated temperatures, which increased Jsc. [55] Practically, it is important to understand the effect of temperature up to around 60 °C, as this covers the temperature range encountered in most real-world situations. Over the course of a single day, variations in temperature can significantly affect device efficiency, and thus a temperature coefficient can be determined to minimize efficiency fluctuations induced by changing temperature. [56]
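To make the temperature dependence captured by Equation (1) concrete, the short R sketch below evaluates it over a typical outdoor temperature range. The effective gap, disorder term, density of states and carrier densities are illustrative assumptions rather than values for any specific device.

# Minimal sketch of Equation (1): Voc falling with temperature. Every parameter
# value below is an illustrative assumption, not a measured or fitted quantity.
k_B   <- 8.617e-5   # Boltzmann constant, eV K^-1
E_g   <- 1.1        # effective donor HOMO - acceptor LUMO gap, eV (assumed)
Delta <- 0.10       # disorder-related loss, eV (assumed)
N_c   <- 1e20       # band-edge density of states, cm^-3 (assumed)
n_e   <- 1e16       # electron density at open circuit, cm^-3 (assumed)
n_h   <- 1e16       # hole density at open circuit, cm^-3 (assumed)

voc <- function(temp_K) E_g - Delta - k_B * temp_K * log(N_c^2 / (n_e * n_h))

temp_K <- seq(250, 320, by = 10)
data.frame(temp_K = temp_K, Voc_V = round(voc(temp_K), 3))
# With these numbers Voc drops by roughly 1.6 mV per kelvin, while Jsc (not
# modelled here) typically rises with temperature, which is why many OPVs show
# a net positive temperature coefficient of efficiency.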
Figure 7. Device PCE as a function of temperature under different irradiance levels. Reproduced with permission. [53] Copyright 2004, Wiley-VCH.
As can be seen in Figure 8, device efficiency has a positive temperature coefficient when measured under outdoor conditions. However, such temperature coefficients are largely dependent on the composition of the active layer and the device architecture, and a temperature coefficient must therefore be established independently for each type of device. Unfortunately, device efficiency is not routinely corrected for the effect of temperature in most reported outdoor lifetime studies. The ambient temperature also affects OPV lifetime. As described previously, OPV device degradation is accelerated by elevated temperature; a process reflected by Equation (2): [57]

AF = (I2/I1) exp[(Ea/kB)(1/T1 − 1/T2)]    (2)

Here AF is the acceleration factor that results from increased temperature and irradiance level, Ea is the activation energy of the degradation process and kB is the Boltzmann constant, with T1 (I1) and T2 (I2) being the temperature (irradiance level) under testing conditions 1 and 2, respectively. This simplistic model makes the following assumptions: 1) the activation energy Ea is constant over the temperature range, 2) the rate of degradation depends linearly on irradiance, and 3) the spectral composition (especially the UV content) of the radiation is unchanged at different irradiance levels. [58] Aging tests on P3HT:PCBM solar cells have confirmed the validity of this relationship and have established an acceleration factor of 4.45 over a storage temperature range from 298 to 333 K. [59] However, under outdoor conditions with the presence of irradiance, photooxidation is the dominant degradation mechanism rather than thermally induced oxidation, and thus the influence of temperature will mainly occur via its effect on the rate of photochemical reaction. [60] In recent years, the emergence of nonfullerene acceptor materials has increased the PCE of bulk heterojunction OPV devices. [3h,61] Besides the high efficiency, another advantage of fullerene-free OPV devices is excellent thermal stability. OPV devices using an unfused-core based nonfullerene acceptor, DF-PCIC, realized a PCE of 10.2%; more importantly, after thermal treatment at 180 °C for over 12 h the devices retained ≈70% of their original efficiency. [62] Similarly, OPV devices based on ITIC, another small-molecule nonfullerene acceptor, also showed excellent thermal stability. [63] Under thermal stress of 100 °C for 100 h, no obvious efficiency loss was observed. Due to the strong tendency of fullerene derivatives to form large aggregates at high temperatures, [58b] OPV devices using fullerene acceptors generally have poor thermal stability. Replacing the fullerene acceptor with nonfullerene acceptors can avoid the morphological instability caused by fullerene aggregation at high temperature and so result in improved thermal stability. We note that in outdoor conditions (especially in some tropical regions), high stability at elevated temperature is essential. Replacing fullerene acceptors with nonfullerene molecules is therefore a promising strategy to extend device lifetime, although a detailed investigation of the stability of such materials against other degradation mechanisms is still needed.
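As a rough numerical illustration of Equation (2), the sketch below evaluates the acceleration factor for the storage-temperature range quoted above. The activation energy is an assumed input rather than a measured value; it is chosen only so that the purely thermal factor comes out close to the reported 4.45.

# Minimal sketch of Equation (2): combined thermal/irradiance acceleration factor.
# The activation energy (~0.36 eV) is an assumed input, selected so that the
# thermal term alone is close to the 4.45 quoted above for 298 -> 333 K.
k_B <- 8.617e-5   # Boltzmann constant, eV K^-1

accel_factor <- function(E_a, T1, T2, I1 = 1, I2 = 1) {
  (I2 / I1) * exp((E_a / k_B) * (1 / T1 - 1 / T2))
}

accel_factor(E_a = 0.36, T1 = 298, T2 = 333)          # thermal acceleration only
accel_factor(E_a = 0.36, T1 = 298, T2 = 333, I2 = 2)  # plus doubled irradiance,
                                                      # under the linear-dose assumption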
Irradiance Level
The irradiance level both affects the device metrics and accelerates the device degradation rate. Ideally, the normalized Jsc and fill factor (FF) should be constant as a function of irradiance level, as charge generation is proportional to the light intensity. Under open circuit conditions, all photogenerated charge carriers recombine within the device. Thus, the recombination mechanisms largely determine the Voc of OPVs. As shown in Figure 9c, [64] Voc varies logarithmically with illumination intensity, with its slope being equal to kT/e. From Equation (1), it can be seen that Voc is particularly susceptible to the density of states (DOS) of the acceptor LUMO and donor HOMO. The DOS in the band tails is dependent on the illumination intensity, as such states can be occupied by photoexcited electrons (in the acceptor) and holes (in the donor). At temperatures above zero, the quasi-Fermi energies move into the gap, thereby reducing the Voc. Based on the above discussion, the overall device efficiency will increase with increasing irradiance intensity; a process that is observed in silicon-based solar cells. In an OPV, however, charge carriers are generated through the processes of photon absorption, exciton diffusion, and separation followed by charge extraction. A higher irradiance level normally results in a higher exciton generation rate, although not all generated excitons undergo separation, as some fraction are lost through monomolecular or bimolecular recombination. [65] The short circuit current is linearly proportional to the irradiance level; however, carrier traps in the active layer significantly influence the dependence of Jsc on the irradiance level. At a high light intensity, more traps become populated, resulting in reduced recombination and a superlinear increase of the photocurrent. [66] The open circuit voltage is expected to be proportional to the logarithm of the light intensity over the temperature range 280 to 320 K, [54] a range that coincides with most outdoor conditions. It is also found that the parallel resistance of OPVs decreases by almost three orders of magnitude as the irradiance level is increased from 0.03 to 100 mW cm−2. However, the overall device efficiency decreases slightly with increased irradiance level due to the negative effect of the decreased parallel resistance. Similar results were observed for OPV devices based on a squaraine dye, [67] with the PCE increasing from 4.3% at 100 mW cm−2 to 6.2% at 3.5 mW cm−2 because of an increased FF. It was believed that at a lower irradiance level, recombination was suppressed due to a lower charge carrier density in the device. A collection-limited theory also confirmed the dependence of device efficiency on irradiance level, as shown in Figure 9. [64] Here, it was found that the space-charge density increased with increasing irradiance level. This increase in space charge with increasing illumination intensity pointed to a filling of deep-level charge traps present in the material. These filled deep-level traps can screen the electric field and thus reduce the charge extraction efficiency. It is worth noting that under outdoor conditions, higher irradiance levels usually correspond to higher temperatures, an issue that makes it difficult to distinguish between codependent factors. The effect of irradiance on device performance under outdoor conditions was investigated by Bristow et al. [56] Here, it was found that at low irradiance, device efficiency was much lower than expected and only reached a maximum at an irradiance of around 600 W m−2, with a clear inflexion characteristic observed in the J-V curve.
It was speculated that there was poor carrier transport through one of the layers or interfaces that prevented efficient charge extraction from the device. This study clearly illustrates the complexity of outdoor testing of OPV devices, with unexpected results sometimes emerging due to the combined effects of a number of environmental factors. Data collection times in OPV lifetime tests can be shortened by exposing devices to concentrated illumination. In order to investigate the intrinsic degradation mechanisms of organic semiconductor materials (rather than complete devices), Tromholt et al. [68] studied the degradation of P3HT and MEH-PPV at varied irradiance levels (between 20 and 100 W cm−2). Here the total absorption was recorded using UV-visible spectroscopy as a function of exposure time at different illumination levels. As shown in Figure 10, it was found that when exposed to concentrated illumination, the degradation of both polymers was accelerated, with the acceleration factor being almost linear with irradiance level. Although the active layer is the most sensitive part of an OPV device, the degradation of the electron and hole transport layers, the device electrodes and the interfaces also needs to be considered. For example, Tromholt et al. [69] investigated the degradation of OPV devices based on a P3HT:PCBM blend as the active layer and found that the device efficiency dropped to 6% of its original value after exposing the device to a constant irradiance of 500 mW cm−2 for 30 min. This degradation was attributed to the desorption of oxygen from the zinc oxide electron transport layer during illumination. The study therefore indicates that the sensitivity of the other materials within the device is critical when engineering enhanced operational stability, and that performance at high irradiance levels can reveal degradation mechanisms that are not observed under normal irradiance conditions. Indeed, under outdoor conditions, the irradiance level seldom reaches values as high as 150 mW cm−2, with the average irradiance level being much less than 1 sun. Degradation mechanisms that only occur at high irradiance levels are therefore of secondary importance in outdoor lifetime tests.
Figure 9. Irradiance-dependent performance of an OPV device as a function of irradiance level. All performance metrics are normalized to values determined at an intensity of 100 mW cm−2. Dotted lines correspond to results from the self-consistent numerical simulations for typical inorganic solar cells. Reproduced with permission. [64] Copyright 2015, National Academy of Sciences of the United States of America.
Humidity
Moisture is a key degradation factor for OPVs. Glen et al. [70] found that moisture plays an important role in the degradation of OPV devices incorporating PEDOT:PSS/ITO and Ca/Al electrodes, with devices exposed to humid air degrading more rapidly than those exposed to dry air. Water was shown to cause the formation of bubbles and voids within the device. It was also concluded that water ingress mainly occurred via the edge of the device rather than through pinholes or defects in the aluminum electrode. This finding emphasized the need for effective encapsulation at the edges of an OPV module. Devices incorporating a PEDOT:PSS layer are believed to be more vulnerable to the effects of moisture because of its hygroscopic nature.
Voroshazi et al. [71] investigated the degradation of P3HT:PCBM based OPV devices incorporating either a PEDOT:PSS or a MoO3 hole transport layer, with the results revealing that moisture induces significant degradation in devices containing a PEDOT:PSS layer. Devices that incorporated a MoO3 hole transport layer, however, appeared relatively stable even in an atmosphere containing moisture (see Figure 11). Similar results were reported by Sun et al., [72] who explored PCDTBT:PC70BM based OPV devices and found that by replacing the PEDOT:PSS hole transport layer with MoOx, it was possible to significantly increase the device air-storage stability. Here, devices incorporating a MoOx hole transport layer retained 50% of their original efficiency after 720 h of air storage without encapsulation. The efficiency of control devices incorporating a PEDOT:PSS hole transport layer instead degraded more rapidly, retaining less than 10% of its original value after air storage for 480 h. However, for encapsulated PCDTBT:PC70BM based OPV devices, Bovill et al. [24] reported that PEDOT:PSS hole transport layers resulted in improved device stability under long-term illumination testing in air compared to devices using MoOx or V2O5 hole transport layers. It is possible that the difference between these findings results directly from differences in the test conditions; studies conducted under full illumination (rather than dark storage) generally result in higher ambient temperatures, which help to remove residual moisture from the PEDOT:PSS and the surrounding device by evaporation. In such circumstances, the hygroscopic nature of the PEDOT:PSS hole transport layer may be of secondary importance. Further work is needed to clarify such issues.
Figure 10. a) Degradation of MEH-PPV expressed as a decrease of the total absorption. b) Acceleration factors for MEH-PPV and P3HT at different solar intensities. Reproduced with permission. [68] Copyright 2011, Elsevier.
Figure 11. Normalized efficiency degradation of devices with either PEDOT:PSS (red triangles) or MoOx (blue circles) as a hole transport layer for devices stored under ambient conditions (≈35% RH) and dry air (<5% RH). Reproduced with permission. [71] Copyright 2011, Elsevier.
Avoiding the ingress of moisture is essential to create stable OPV modules. It has been shown that the WVTR should be less than 10−6 g m−2 day−1 in OLEDs to achieve suitable lifetimes. [73] However, the global standard for OPV devices has not yet been established. For OPV devices having relatively stable electrodes, Cros et al. [74] showed that a WVTR of 10−3 g m−2 day−1 was necessary to obtain a lifetime of several years. This less demanding WVTR requirement for OPVs points favorably to the use of low-cost encapsulation solutions. Interestingly, replacing fullerene acceptors with nonfullerene acceptor molecules can also increase the air-storage stability. Using the nonfullerene acceptor O-IDTBR, P3HT based solar cells exhibited an efficiency of 6.4%, which is even higher than that of fullerene-based P3HT solar cells. More importantly, the stability of O-IDTBR:P3HT devices under ambient dark storage was determined to be superior to that of fullerene-based OPV devices. [50a] The devices degraded quickly during the first 60 h, after which the PCE remained relatively stable, retaining 73% of its initial value after 1200 h of ambient dark storage. This result confirmed the good stability of fullerene-free OPV devices against water and oxygen in the ambient atmosphere.
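To put the WVTR figures quoted above in perspective, the short sketch below converts a barrier's WVTR into the cumulative mass of water reaching a module over a target lifetime. The module area and target lifetime are assumptions chosen only for illustration.

# Minimal sketch: cumulative water ingress implied by a given WVTR. The module
# area and the target lifetime are illustrative assumptions.
water_ingress_g <- function(wvtr_g_m2_day, area_m2, years) {
  wvtr_g_m2_day * area_m2 * years * 365
}

area_m2 <- 0.5   # assumed module area
years   <- 3     # assumed target outdoor lifetime

water_ingress_g(1e-6, area_m2, years)  # OLED-level barrier: ~5.5e-4 g over 3 years
water_ingress_g(1e-3, area_m2, years)  # OPV-level barrier:  ~0.55 g over 3 years

The three-orders-of-magnitude difference in tolerated water ingress is what makes low-cost, flexible barrier foils a realistic option for OPVs but not for OLEDs.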
Thermal Fluctuations
Thermal fluctuations are a natural consequence of outdoor lifetime testing, with this process also contributing to the degradation of OPV devices. For this reason, thermal cycling tests form an essential component of the tests applied to commercially available PV. [75] In outdoor conditions, ambient temperatures can vary by over 20 °C in a single day, with such fluctuations being even larger in certain geographic locations. To explore the importance of thermal fluctuations for OPV stability, Wang et al. [76] alternated the storage temperature of PCDTBT- and P3HT-based OPVs between 80 and 25 °C every 12 h over a total period of 300 h. It was found that the PCDTBT and P3HT based devices retained 90% and 80% of their original efficiency, respectively (see Figure 12). This test was conducted under a nitrogen atmosphere in the dark. It is believed [77] that under outdoor conditions, the degradation caused as a result of thermal fluctuations will be enhanced by the presence of oxygen, moisture and illumination. Indeed, the effect of thermal cycling on the device efficiency and mechanical integrity of P3HT:PCBM based OPV devices has been investigated under even harsher conditions. [78] Here, it was found that thermal cycling between −40 and +85 °C at a heating/cooling rate of ≈1.4 °C min−1 over 200 cycles caused the device efficiency to decrease from ≈2.0% to ≈1.5% during the first 5 cycles, with the efficiency remaining constant afterward.
Figure 12. Device efficiency with P3HT/PC71BM as a function of storage time (300 h) following a thermal stability test in N2, and IPCE spectra of the devices with P3HT/PC71BM or PCDTBT/PC71BM before and after the thermal stability tests. Reproduced with permission. [76] Copyright 2011, Elsevier.
Burn-In Process in OPV
Figure 1 plots a typical degradation curve of an OPV device. Here, the efficiency undergoes an initial, rapid period of degradation that is termed "burn-in." The efficiency loss during burn-in varies for different materials; for example, an efficiency loss of up to 40% was observed in PCDTBT based OPV devices during burn-in, [14,15,20] while this is as much as 60% in PBDTTT-EFT based OPV devices. [12] The OPV burn-in process is related to device irradiation, as no obvious burn-in is observed under dark storage. [13] The origins of burn-in loss have been attributed to photo-induced reactions in the active layer and the formation of sub-band gap states. [13] Such sub-band gap states in OPV devices are believed to reduce Jsc and Voc in two ways. First, they increase the recombination rate, reduce the exciton lifetime and diffusion length and thus reduce the steady-state charge carrier density. [79] The charge carrier density is directly related to Jsc. Secondly, charge carriers can fill sub-band gap states near the quasi-Fermi level. Even though this does not change the total charge carrier density, [80] such sub-band gap states can still result in a Voc loss. [81] This is reflected in Equation (1), as the quasi-Fermi levels move away from the donor HOMO and acceptor LUMO levels and into the energy gap between them. [82] The formation of sub-band gap states has been confirmed using photothermal deflection spectroscopy (PDS). [13] Here, PCDTBT:PC71BM blend films were deposited on a quartz substrate and exposed to a 1 sun equivalent irradiance. PDS absorption spectra were then periodically measured and compared with an unexposed control film.
As shown in Figure 13, an increased absorption was observed in the energy region below 1.2 eV, indicating the formation of sub-band gap states. As can be seen, this absorption increase occurs most rapidly during the first 120 h of exposure and changes at a similar rate to the decrease in solar cell efficiency observed during burn-in. Over the next 240 h, the rate of increase slowed, with the device efficiency also degrading at a slower rate. This indicates that the "burn-in" process lasts for around 120 h and has the same origin as the absorption enhancement below 1.2 eV in the PDS spectra. Photo-induced dimerization of fullerenes is another possible origin of device burn-in, as this reduces the active-layer exciton-harvesting efficiency and thus results in a loss in the short circuit current density. It has been shown that the external quantum efficiency (EQE) loss after exposure to illumination mainly corresponds to the reduced absorption of the fullerene. [83] In a dimerized fullerene, excitons are trapped in the fullerene phase and cannot be separated and collected efficiently; a process resulting in a reduced Jsc. By replacing PCBM with the nonfullerene acceptor rhodanine-benzothiadiazole-coupled indacenodithiophene (IDTBR), [84] P3HT:IDTBR based OPV devices lost only 5% of their relative PCE after exposure to a 1-sun equivalent irradiance over the course of 2000 h. This degradation rate is significantly less than that of P3HT:PCBM devices, which under the same test conditions underwent a relative PCE loss of 34%. This indicates that the use of nonfullerene acceptors may be an effective strategy to increase the stability of OPV devices. In PffBT4T-2OD:PCBM based OPV devices, [51b] an abnormally strong burn-in degradation has been observed, with the PCE dropping from 9.20% to 5.62% after dark storage for 5 days. Here, demixing of the donor/acceptor mixed phase within the BHJ film was identified as the cause of this considerable efficiency loss. Such spontaneous phase separation in mixed amorphous regimes can occur at room temperature and is independent of the storage conditions. The authors claimed that this phenomenon is highly dependent on the material combination used in the BHJ film. This study indicates that not all OPV burn-in losses are photo-induced; rather, morphological evolution is also a potential degradation mechanism in some specific material systems. In contrast, Pearson et al., [12] working on PBDTTT-EFT:PC71BM based OPV devices, observed that the nanostructure of the active layer and the kinetics of free charge generation were apparently unchanged after burn-in, and thus the initial degradation of device efficiency was attributed to the generation of charge-trapping states and suppressed charge carrier dissociation. Clearly, the morphological evolution of each BHJ system is highly dependent on the molecular structure of the particular materials used, with more work required to bring the different observations into a coherent framework. Interestingly, burn-in losses are nearly negligible if the fullerene acceptor in PffBT4T-2OD based OPV devices is replaced with a nonfullerene derivative. [85] For example, PffBT4T-2OD:EH-IDTBR based OPV devices showed no degradation under constant irradiance stress for over 60 h, with the devices also having promising stability under a thermal stress of 85 °C (see Figure 14); a result pointing to the promising morphological stability of nonfullerene-based PffBT4T-2OD OPV devices.
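Burn-in losses such as those discussed above are usually quantified directly from the normalised PCE trace. One practical approach is sketched below: fit the trace with an initial exponential loss followed by a slow linear decay. The functional form and the synthetic data are illustrative assumptions, not a model prescribed by the cited studies.

# Minimal sketch: separating burn-in from the slower long-term decay in a
# normalised PCE trace. Both the exponential-plus-linear form and the synthetic
# data are illustrative assumptions.
set.seed(2)
t_h <- seq(0, 1000, by = 10)
pce <- 0.6 + 0.4 * exp(-t_h / 120) - 1e-4 * t_h + rnorm(length(t_h), sd = 0.01)

fit <- nls(pce ~ a + b * exp(-t_h / tau) - k * t_h,
           start = list(a = 0.5, b = 0.5, tau = 100, k = 1e-4))
round(coef(fit), 5)
# 'b' estimates the fractional burn-in loss, 'tau' its time scale (about 120 h
# here, similar to the PDS observation window above), and 'k' the post-burn-in
# decay rate used for extrapolating longer-term lifetimes.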
The improved stability against photo-induced burn-in loss of PffBT4T-2OD:EH-IDTBR OPV devices is attributed to a greater resistance to photo-induced electronic trap state formation compared to devices incorporating a PC71BM acceptor. These results suggest better stability of fullerene-free OPV devices over those using fullerene acceptors. However, the light soaking experiments lasted for only 60 h, which makes it impossible to extract the Ts80 lifetime of the fullerene-free OPV devices, and so a direct comparison of the published data cannot be made.
Summary and Outlook
We have reviewed the status of outdoor lifetime studies of OPVs. The reported outdoor operational lifetime of certain OPV modules has now reached a period of several years; a promising result considering that 10 years ago, typical device lifetimes were in the range of a few days to weeks. OPV lifetime studies conducted under laboratory conditions were briefly reviewed. The "o-diagram" methodology and accumulated energy dose analysis can be used to make comparisons between indoor and outdoor lifetime studies; however, indoor-based tests do not fully simulate the outdoor environment. Direct measurements of OPV outdoor lifetime were reviewed. Here we discussed the development of experimental systems used in outdoor lifetime studies, with recommendations made to increase the consistency of different outdoor lifetime tests. Long-term outdoor lifetime test results for different OPV material systems were then summarized. It was highlighted that certain OPV modules fabricated using roll-to-roll processes and encapsulated using flexible PET foils have very promising operational stability when measured under outdoor conditions. In the majority of studies, however, OPVs are fabricated using nonscalable techniques and have a limited active area. Nevertheless, such studies are useful in exploring the intrinsic stability of OPV materials and devices when exposed to different geographic locations and climatic conditions. In outdoor conditions, the irradiance level, temperature, humidity, and thermal fluctuations have been identified as key degradation factors, and their influence on OPV performance and stability was discussed. Finally, the burn-in phenomenon observed during the initial period of OPV operation was introduced, with burn-in-free OPVs based on nonfullerene acceptors being highlighted. The stability of fullerene-free OPV devices looks promising based on current research results, especially under thermal stress and light soaking. However, more systematic investigation is required, including outdoor lifetime studies of devices with nonfullerene acceptors. Although considerable progress has been made in outdoor lifetime testing of OPVs, some challenges remain, including the development of a standard outdoor lifetime testing platform and testing strategy. In addition, a comprehensive, predictive method to fully link lifetime tests conducted under indoor (accelerated) conditions to outdoor real-world conditions should be developed. Outdoor lifetime testing is generally limited to the most well-established material systems (such as P3HT:PCBM and PCDTBT:PC70BM), and thus it will be interesting to extend it to new donor/acceptor blends having high efficiency, even if such tests are initially performed over a limited period under the ISOS-O-1 basic testing protocol.
Figure 14. Normalized PCE of PffBT4T-2OD:EH-IDTBR devices during a) light soaking without UV light, with devices maintained at a temperature below 50 °C, and b) during annealing at 85 °C in a nitrogen atmosphere. Reproduced with permission. [85] Copyright 2017, Wiley-VCH.
12,608.4
2018-06-10T00:00:00.000
[ "Engineering" ]
Normalization and Comparison of Photoplethysmography Between Normal and Patient Groups Using Deep Neural Networks
Photoplethysmography (PPG) is easy to perform and provides a variety of measurements, including details of heart rate and arrhythmia. However, automated PPG methods have not been developed because of their susceptibility to motion artifacts and differences in waveform characteristics among individuals. With the increasing use of telemedicine, there is growing interest in the application of deep neural network (DNN) technology for efficient analysis of vast amounts of PPG data. This study proposes an automatic algorithm incorporating DNNs for individual and patient-group identification; this is achieved by selecting normally measured waveforms, deleting error regions, and normalizing the pulse wave to obtain 10 "section values" that can be easily compared to other waveforms. The proposed algorithm was able to distinguish between patients aged 60-75 years with diabetes and hypertension and healthy subjects aged 25-35 years (AUC = 0.998). On the other hand, errors were frequently observed in the identification of individuals (AUC = 0.819).
Introduction
As the demand for noncontact telemedicine services increases, the amount of medical data directly measured by patients themselves outside the hospital setting is increasing 1 . To improve the reliability of remotely transmitted medical data, a technology capable of capturing the characteristics of individuals and patients with diseases is required 2 . Photoplethysmography (PPG) is one of the most frequently used medical monitoring technologies due to its convenience, and it has been employed to predict the likelihood of disease and for identification of individuals [3][4][5][6][7][8] . However, as most conventional automatic PPG analyzers have difficulty in distinguishing waveforms caused by motion artifacts and electrode-connection failures, etc., increasing the reliability of automatic analysis requires deep neural network (DNN) technology for normalization of PPG waveforms and to exclude artifacts [9][10][11] . Although many methods for determining the likelihood of disease and identifying individual characteristics have been proposed, they have the limitation of requiring human experts to determine whether the PPG waves are informative or uninformative [5][6][7][8] . Also, these methods have not been validated because it is difficult to obtain sufficient data through manual analysis. Therefore, a number of DNN techniques have been applied with the goal of replacing human experts in PPG analysis. However, their accuracy remains low and more training data are required in the machine learning process to increase the accuracy of these DNNs [12][13][14][15][16][17][18][19] . Recently, telemedicine in Korea has generated large amounts of PPG data 20 . Reliable DNN techniques have been developed to select data from the transmitted PPGs and normalize the PPG waveform according to the heart-rate (HR) cycle, while excluding data affected by motion artifacts or electrode connection failures [21][22][23][24] . It is expected that a novel DNN algorithm for normalizing the PPG data of healthy subjects and patients with diseases will improve automatic PPG analysis for individual and patient-group identification. The six DNN models in this study identify features of the heartbeat in PPG waveforms and evaluate whether the PPG waves are informative. The algorithm selects the informative region in the PPG and divides it into 10 sections according to the phases of the HR cycle.
Then, it calculates the mean ± standard deviation of each section, for use as criteria in individual and patient-group identification. This study was performed to evaluate the personal and patient-group identification accuracy of our new algorithm.
Results
Personal identification using PPG data
Figures 1a-c show example PPG data from subjects #1, #3, and #15, respectively, for identification based on previously obtained section-specific data. Although subjects #1 and #3 both belonged to the young healthy group, their data showed some differences, as did those of subject #1 compared to one of the elderly patients (subject #15). Each individual's statistical data were obtained for the 10 sections, and the mean PPG velocity ± standard deviation was calculated for each section in advance by analyzing the data of at least 120 heartbeats. The statistical data were obtained in five measurement periods and were normalized according to the heart rate cycle. In Fig. 1, the top (red) and bottom (blue) lines are the criteria for subject #1, and were calculated as the sum and difference of the mean and standard deviation per section, respectively. The newly measured values of subject #1 are consistent with the criteria of subject #1, shown as lines in Fig. 1a. As shown in Fig. 1b, the PPG waveform of normal subject #3 was similar to that of normal subject #1, in that the reflection wave was larger than that of the patient group. However, there was a difference in reflection wave occurrence. If this method can find differences even among individuals within the same group, it may be possible to distinguish individuals based on the waveform characteristics. However, common characteristics among subjects in the normal group can lead to many errors in individual identification. As shown in Fig. 1c, subject #15, who suffered from hypertension and diabetes, was older than the healthy subjects in the control group. As the patient group shows distinct differences from the healthy controls, the possibility of misidentifying a patient as a healthy subject is low. Figures 1d-f compare the PPG data of subjects #15, #10, and #1 with the statistical data of subject #15. The newly measured PPG data of subject #15 show high consistency with the statistical data of subject #15, as shown in Fig. 1d. Figure 1e shows that, although subjects #15 and #10 were the same age and had small reflection waves on PPG in common, they were recognized as distinct individuals based on differences in their pulsation waves. However, the common features and simplicity of the waveforms among the subjects in the patient group lead to a high probability of individual identification failure. The receiver operating characteristic (ROC) curve in Fig. 2 shows the sensitivity (true-positive rate, TPR) and false-positive rate (FPR) for distinguishing each individual from all other individuals. Z-values are obtained by normalizing the differences between the newly measured PPG data and the previous statistical data (averaging them and dividing by the standard deviation). The area under the curve (AUC) value of 0.819 indicates that identification of individuals is inaccurate, but possible to a limited extent.
Patient group identification using PPG data
Table 1 shows the differences of beat-to-beat PPG waves between the healthy control and patient groups in each of the 10 sections. Distinct differences between the younger healthy group and the older patient group were observed in sections 2-6.
After normalizing the differences by the standard deviation, the waveform was clearly larger in the healthy control subjects than in the patients in section 5 (p = 0.05). The patients' waveforms were generally larger than those of the healthy controls in sections 2 and 3. The lines in Fig. 3a and b show the sum and difference of the mean and standard deviation per section of the PPG waves of the younger healthy controls. In Fig. 3a, the newly measured PPG data from subject #6 in each section are shown as circles between the upper and lower lines. The lines in Fig. 3a and b show the statistical data obtained from the younger healthy control subjects. Figure 3b shows the differences between the statistical data of the control subjects and the values for subject #14, who suffered from diabetes and hypertension. In Fig. 3c, as the PPG values of subject #14 are between the lines corresponding to the mean ± standard deviation of the elderly patients with diabetes and hypertension, subject #14 was judged to belong to the patient group. However, Fig. 3d shows that the data of subject #6 are distinct from those of the patient group. Figure 4a shows the accuracy of the data classification (healthy control or patient group) based on analysis of the ROC curve, calculated from the Z-values of the differences between the measured data and the criterion data. Figure 4b shows the classification accuracy for unknown, newly obtained data from 353 measurements (healthy controls, n = 150; patients, n = 203) not included in the model-building process. The AUC value was 0.998.
Discussion
A number of previous studies used DNN techniques to analyze PPG and obtain reliable parameters related to cardiovascular disease 10,[12][13][14][15][16][17][18][19] . The recognition score (RS) obtained from the DNN models in this study indicates whether the PPG waveform is caused by heartbeat or by errors 22 . Previous studies have also shown that HR and heart rate variability (HRV) obtained by an algorithm incorporating DNNs and selecting only high-RS PPG data are more reliable than those of conventional algorithms that produce HR and HRV from PPG without removing ambiguous data that include errors 23 . In addition, when the RS is low, SPO2 and HR measured simultaneously using conventional SPO2 devices can show large errors 24 . These studies showed that DNN algorithms are useful for determining the reliability of remotely measured vital signals [21][22][23][24] . If the algorithm proposed in this study is applied to a remote medical information system, it is possible to increase the efficiency and reliability of medical data collection by automatically requesting the patient to remeasure questionable data when some data differ significantly from previous data. A remote medical information system that can remove unreliable data automatically, thus reducing the amount of data that must be inspected by staff, will decrease operation costs. As shown in Fig. 2, identification of individuals using our algorithm is not entirely reliable because of the high error rate (> 20%). It would be useful if the system could distinguish mistakenly transmitted PPG waveforms of family members coresiding with the patient. When the patient's re-transmitted data still differ from their statistical norm, a message could be sent to the medical staff instructing them to inspect the medical data and identify the cause of the changes in the patient's condition. Therefore, the individual identification function can contribute to the efficient management of telemedicine data.
As shown in Fig. 4, patient-group classification accuracy was very high (AUC = 0.998), because the peaks corresponding to the heartbeat and reflection were higher in the young healthy control group than in the older patient group. However, there were significant differences in both age and disease type between the healthy control and patient groups in this study, so it is not clear which of these two factors the observed differences were due to. Takazawa et al. reported larger reflection waves in young people. In addition, the amplitudes of reflection waves are lower in patients with cardiovascular disease than in healthy controls of the same age 6,25 . The collection and analysis of PPG waveform characteristics from more patients according to age and disease type would enable a remote artificial-intelligence system to determine the likelihood of diseases accurately. Although PPG waveforms can elucidate a patient's condition, most manual methods of PPG analysis have not been verified. Our proposed algorithm is useful for collecting various types of PPG waveforms and identifying patients' conditions based on specific characteristics, and will contribute to the development of new monitoring techniques for patients in remote locations.
Methods
PPG analysis system
Figure 5 shows the PPG waveform analysis system. The data are transmitted through a telemedicine network, and whether the vital signs are normal or have changed for any reason is determined. Initially, six DNN models inspect PPG waveforms associated with heartbeats, "blood pressure reflection" from arteries, and motion artifacts. Abnormal waves (RS < 80) are removed, because previous studies have shown that a low RS in PPG analysis indicates that the waveform is related to motion artifacts or probe connection failures rather than the heartbeat. The DNNs also determine the start and end of the heartbeat cycle for a normal PPG waveform. Each PPG waveform corresponding to a heartbeat is normalized in terms of amplitude and differentiated to obtain the PPG velocity (vPPG). Beat-to-beat vPPG is divided into 10 sections, the mean values of which represent the characteristics of the waveform. The mean ± standard deviation values of the 10 sections reflect the characteristics of an individual's PPG waveform. These "section values" of beat-to-beat vPPGs may differ among patients, even those suffering from the same diseases. However, commonalities among patients with the same disease may also be seen, and may be distinct from those of healthy subjects. The individual mean ± standard deviation vPPG section values are obtained through five measurements conducted at different times. They are also saved in the system and used as criteria to assess the accuracy of individual and patient-group identification for each newly assessed individual. Figure 6 shows the PPG waveform-processing steps. Each DNN in the system consists of one input layer (124 × 1), two hidden layers (124 × 1, 124 × 1), and one output layer (21 × 1) 23 . Three of the DNNs find the set (S) point, the onset (O) point, and the peak point of the reflection wave (W point), which are related to heart contractions 11 . Another DNN finds the Z point associated with blood pressure reflection from peripheral arteries. The other two DNNs find the start and end points of uninformative sections of data, which are mostly caused by motion artifacts and probe connection failures.
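A minimal sketch of this per-beat processing, with synthetic beats standing in for measured PPG, is given below. The helper function and the waveform shape are assumptions made for illustration and are not the authors' implementation.

# Minimal sketch of the beat processing described above: amplitude-normalise
# each beat, differentiate it to a velocity signal (vPPG), split it into 10
# sections and summarise each section. The synthetic beats are an illustrative
# assumption, not data from the study.
section_values <- function(beat, n_sections = 10) {
  beat <- (beat - min(beat)) / (max(beat) - min(beat))   # amplitude normalisation
  vppg <- diff(beat)                                      # PPG velocity
  idx  <- cut(seq_along(vppg), breaks = n_sections, labels = FALSE)
  tapply(vppg, idx, mean)                                 # one value per section
}

# Synthetic beats: a systolic peak plus a smaller, later reflection wave
t <- seq(0, 1, length.out = 120)
beats <- replicate(40, dnorm(t, 0.25, 0.06) + 0.5 * dnorm(t, 0.55, 0.09) +
                       rnorm(length(t), sd = 0.05))

sec      <- apply(beats, 2, section_values)               # 10 x 40 section values
criteria <- data.frame(mean = rowMeans(sec), sd = apply(sec, 1, sd))
criteria                                                  # per-section mean and SD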
To determine whether the PPG waveforms represent normal heartbeats or noise, the recognition rates of the S, O, and W points of the PPG waveforms during the beat-to-beat period should be considered 23 . Error regions should not be included in the beat-to-beat period. The RS is calculated by summing the recognized S, O, and W points and subtracting the recognized error regions. The RS value of a heartbeat cycle can be used as an index of the reliability of SPO2 and HR data measured simultaneously using an SPO2 device 24 . Previous studies indicated that the W point shows the highest reliability in terms of identifying heartbeats with DNNs 22 . Thus, the PPG waveforms are divided at the W points to normalize the beat-to-beat vPPG.
Identification of individuals and diseases
During measurement, 40 beat-to-beat vPPGs are usually obtained and transmitted to the system. If the percentage of normal heartbeats with a high RS (≥ 80) does not exceed 90% (of all heartbeats), the system sends a message to the patient indicating the need for re-measurement. Subsequently, all reliably measured beat-to-beat vPPGs are divided into 10 sections, and the value of each section can be represented as the mean ± standard deviation. For ease of comparison with other values in the same section, the statistical ranges are normalized as Z-values using the following equation:
Z-value = (mean of the criterion data − mean of the measured data)/(SD of the criterion data)
Criterion data can be obtained from individuals or from groups of patients with the same disease. In this study, 15 subjects transmitted PPG waveforms on 428 occasions through the telemedicine system; 75 of the transmitted datasets were used to create 15 sets of individual criterion data for individual identification and 2 sets of group criterion data for the healthy control and patient groups. Absolute Z-values were calculated for the 10 sections, and the average sectional Z-values were analyzed to determine how closely the newly measured data (n = 353) conformed to the criterion data of the individuals, the healthy control group, and the patient group. All experimental protocols of the clinical trial were approved by the institutional review board of Kangwon National University (KNUH-2020-06-008-008). All methods were carried out in accordance with relevant guidelines and regulations. Remotely transmitted data from eight healthy control subjects aged 25-37 years, and seven patients aged 63-78 years with diabetes and hypertension, were used. Table 2 shows the diseases, ages, and numbers of the subjects. The data from the first five measurements were used to determine the criteria for individual and patient-group identification. The other data were then used to evaluate the accuracy of the individual and patient-group identification.
Figure 1. Example personal identification data from subjects #1 and #15.
Figure 2. Receiver operating characteristic (ROC) curves for personal identification using newly measured PPG data and statistical data.
Figure 3. Example patient group identification data from subjects #6 and #14.
Figure 4. Accuracy of the classification of unknown data to the healthy control and patient groups.
Figure 5. Block diagram of the PPG waveform analysis system, which incorporates DNN models to select normal data and identify critical changes.
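A minimal sketch of the Z-value comparison defined in the Methods above is given below. The stored criteria and the newly measured section values are illustrative assumptions, not data from the study.

# Minimal sketch of the Z-value comparison. 'criteria' stands in for the stored
# per-section mean and SD of one individual or group, and 'new_sections' for the
# 10 section values of a newly transmitted measurement; both are assumptions.
z_score <- function(new_sections, criteria) {
  z <- (criteria$mean - new_sections) / criteria$sd
  mean(abs(z))                                  # average absolute sectional Z-value
}

criteria <- data.frame(mean = c(0.12, 0.35, 0.55, 0.40, 0.20,
                                0.05, -0.10, -0.20, -0.15, -0.05),
                       sd   = rep(0.05, 10))
set.seed(3)
new_sections <- criteria$mean + rnorm(10, sd = 0.04)

z_score(new_sections, criteria)   # small score: the measurement fits the criteria
# In the study this score is compared against the criteria of each individual
# (or of the healthy and patient groups) and thresholded via ROC analysis.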
3,824.8
2021-06-02T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Weighted Effect Coding for Observational Data with wec
Weighted effect coding refers to a specific coding matrix used to include factor variables in generalised linear regression models. With weighted effect coding, the effect for each category represents the deviation of that category from the weighted mean (which corresponds to the sample mean). This technique has particularly attractive properties when analysing observational data, which commonly are unbalanced. The wec package is introduced, which provides functions to apply weighted effect coding to factor variables, and to interactions between (a) a factor variable and a continuous variable and (b) two factor variables.
Introduction
Weighted effect coding is a type of dummy coding to facilitate the inclusion of categorical variables in generalised linear models (GLM). The resulting estimates for each category represent the deviation from the weighted mean. The weighted mean equals the arithmetic mean or sample mean, that is, the sum of all scores divided by the number of observations. As we will show, weighted effect coding has important advantages over traditional effect coding if unbalanced data are used (i.e. with categories holding different numbers of observations), which is common in the analysis of observational data. We describe weighted effect coding for categorical variables and their interactions with other variables. Basic weighted effect coding was first described in 1972 (Sweeney and Ulveling, 1972) and recently updated to include weighted effect interactions between categorical variables (Grotenhuis et al., 2017b,a). In this paper we develop the interaction between weighted effect coded categorical variables and continuous variables. All software is available in the wec package.
Treatment, effect, and weighted effect coding
Weighted effect coding is one of various ways to include categorical (i.e., nominal and ordinal) variables in generalised linear models. As this type of variable is not continuous, so-called dummy variables that represent the categories of the categorical variable have to be created first. In R, categorical variables are handled by factors, to which different contrasts can be assigned. For unordered factors, the default is dummy or treatment coding. With treatment coding, each category in the factor variable is tested against a preselected reference category. This coding can be specified with contr.treatment. Several alternatives are available, including orthogonal polynomials (the default for ordered factors, set with contr.poly), Helmert coding to compare each category to the mean of all subsequent categories (contr.helmert), and effect coding (contr.sum). In effect coding (also known as deviation contrast or ANOVA coding), parameters represent the deviation of each category from the grand mean across all categories (i.e., the sum of the arithmetic means in all categories divided by the number of categories). To achieve this, the sum of all parameters is constrained to 0. This implies that the possibly different numbers of observations in the categories are not taken into account. In weighted effect coding, the parameters represent the deviation of each category from the sample mean, corresponding to a constraint in which the weighted sum of all parameters is equal to zero. The weights are equal to the number of observations per category.
The differences between treatment coding, effect coding, and weighted effect coding are illustrated in Figure 1, showing the mean wage score for 4 race categories in the USA. The grey circles represent the numbers of observations per category, with whites being the largest category. In treatment coding, the parameters for the Black, Hispanic and Asian populations reflect the mean wage differences from the mean wage in the white population, which serves here as the reference category. The dotted double-headed arrow in Figure 1 represents the effect for Blacks based on treatment coding, with whites as the reference category. In effect coding, the reference is not one of the four racial categories, but the grand mean. This mean is the sum of all four (arithmetic) mean wages divided by 4; it amounts to 49,762 and is shown as the dashed horizontal line in Figure 1. The effect for Blacks then is the difference between their mean wage score (37,664) and the grand mean; it is represented by the dashed double-headed arrow and amounts to (37,664 − 49,762 =) −12,096. Weighted effect coding accounts for the number of observations per category, and thus first weighs the mean wages of all categories by their different numbers of observations. Because whites outnumber all other categories, the weighted (sample) mean (= 52,320) is much larger than the unweighted (grand) mean, and is represented by the horizontal continuous line in Figure 1. As a consequence, the effect for Blacks, measured from this higher sample mean, is larger than under effect coding. If the data are balanced, meaning that all categories have the same number of observations, the results for effect coding and weighted effect coding are identical. With unbalanced data, such as is typically the case in observational studies, weighted effect coding offers a number of interesting features that are quite different from those obtained by unweighted effect coding. First of all, in observational data the sample mean provides a natural point of reference. Secondly, the results of weighted effect coding are not sensitive to decisions on how observations were assigned to categories: when categories are split or combined, the grand mean is likely to shift, as it depends on the means within categories. In weighted effect coding the sample mean of course remains unchanged. Therefore, combining or splitting other categories does not change the effects of categories that were not combined or split. Finally, weighted effect coding allows for an interpretation that is complementary to treatment coding, and seems particularly relevant when comparing datasets from different populations (e.g., from different countries, or time periods): the effects represent how deviant a specific category is from the sample mean, while accounting for differences in composition between the populations. Looking at Figure 1, this would allow for the finding that the Black population had become more deviant over time in a situation where the whites grew in numbers (thus shifting the weighted mean upwards) while the wage gap between Blacks and whites remained constant (the dotted line, as would be estimated with treatment coding). The coding matrix for weighted effect coding is shown in Table 1. In effect coding, the columns of the coding matrix would all have summed to 0. This can be seen in the first example of the next section.
The coding matrix for weighted effect coding is based on the restriction that the columns, multiplied by the number of observations in each category, sum to zero.
Examples
This article introduces the wec package, which provides functions to obtain coding matrices based on weighted effect coding. The examples in this article are based on the PUMS data.frame, which has data on wages, education, and race in the United States in 2013. It is a subset of 10,000 randomly sampled observations, all aged 25 or over and with a wage larger than zero, originating from the PUMS 2013 dataset (Census, 2013). Because the calculation of weighted effect coded variables involves numbers of observations, it is important to first remove any relevant missing values (i.e., list-wise deletion) before defining the weighted effect coded variables.
> library(wec)
> data(PUMS)
We first demonstrate the use of standard effect coding, which is built into base R, to estimate the effects of race on wages. To ensure that the original race variable remains unaltered, we create a new variable 'race.effect'. This is a factor variable with 4 categories ('Hispanic', 'Black', 'Asian', and 'White'). Effect coding is applied using the contr.sum() function. 'White' is selected as the omitted category by default. Then, we use this new variable in a simple OLS regression model. This is shown below:
> PUMS$race.effect <- factor(PUMS$race)
> contrasts(PUMS$race.effect) <- contr.sum(4)
The results of regressing wages on the effect coded race variable (only the fixed effects are shown above) indicate that the grand mean of wages is 49,762. In Figure 1 this grand mean was shown as the horizontal, dashed line. This is the grand (unweighted) mean of the average (arithmetic) wages among Hispanics, Blacks, Asians, and white Americans. The mean wage among Blacks ('race.effect2'; refer to the coding matrix to see which category received which label) is, on average, 12,096 dollars lower than this grand mean. This was shown as the dashed double-headed arrow in Figure 1. The wages of Asians ('race.effect3'), on the other hand, are on average 16,135 dollars higher than the grand mean. We already saw in Figure 1 that not only do the average wages vary across races, but also that the numbers of Hispanics, Blacks, Asians, and whites are substantially different. As these observational data are so unbalanced, the grand mean is not necessarily the most appropriate point of reference. Instead, the sample (arithmetic) mean may be preferred as a point of reference. To compare and test the deviations of all four mean wages from the sample mean, weighted effect coding has to be used:
> PUMS$race.wec <- factor(PUMS$race)
> contrasts(PUMS$race.wec) <- contr.wec(PUMS$race.wec, "White")
> contrasts(PUMS$race.wec)
The example above again creates a new variable ('race.wec') and uses the new contr.wec() function to assign a weighted effect coding matrix. Unlike many other functions for contrasts in R, contr.wec() requires not only that the omitted category is specified, but also the factor variable for which the coding matrix is computed. The reason for this is that, as seen in Table 1, to calculate a weighted effect coded matrix, information on the number of observations within each category is required. The coding matrix now shows a (negative) weight (which is the ratio between the number of observations in category x and in the omitted category 'White') for the omitted category, which was -1 in the case of effect coding.
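The restriction underlying this coding matrix can be checked directly with the functions already introduced; the short sketch below multiplies each column of the matrix by the category sizes and confirms that the weighted column sums are zero.

# Check the restriction behind Table 1 on the PUMS race factor: each column of
# the weighted effect coding matrix, weighted by the category sizes, sums to zero.
library(wec)
data(PUMS)

race <- factor(PUMS$race)
contrasts(race) <- contr.wec(race, "White")

n <- table(race)                          # observations per category
W <- contrasts(race)                      # weighted effect coding matrix
colSums(W * as.numeric(n[rownames(W)]))   # all columns are (numerically) zero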
In the regression analysis, the intercept now represents the sample mean and all other effects represent deviations from that sample mean. This corresponds to the continuous double-headed arrow and line in Figure 1. For instance, Blacks earn on average 14,654 dollars less than the sample mean. To test how much whites' mean wage differs from the sample mean, the omitted category must be changed, and subsequently a new variable is used in an updated regression analysis (sketched below). Here, the omitted category was changed to Blacks. Note that the intercept as well as the estimates for Hispanics and for Asians did not change. This is unlike treatment coding, where each estimate represents the deviation from the omitted category (in treatment coding: the reference category). The new estimate shows that whites earn 2,128 dollars more than the mean wage in the sample. In the remainder of this article we use 'White' as the omitted category by default, but in all analyses the omitted category can be changed.

Next, we control the results for respondents' level of education using a continuous variable (which is mean centred to keep the intercept at 52,320). The results show that one additional point of education is associated with an increase in wages of 9,048. This represents the average increase of wages due to education in the sample while controlling for race. The estimates for the categories of race again represent the deviation from the sample mean, controlled for education. When more control variables are added, the weighted effect coded estimates still represent the deviation from the sample mean, but now controlled for all other variables as well. Comparing these estimates to those from the model without the control for education suggests that educational differences partially account for racial wage differences. In the next section, we discuss how weighted effect coded factor variables can be combined with interactions, to test whether the wage returns of educational attainment vary across race.

Interactions

Weighted effect coding can also be used in generalised linear models with interaction effects. The weighted effect coded interactions represent the additional effects over and above the main effects obtained from the model without these interactions. This was recently shown for an interaction between two weighted effect coded categorical variables (Grotenhuis et al., 2017a). In this paper we address the novel interaction between weighted effect coded categorical variables and a continuous variable. In the previous section a positive effect (9,048) of education on wage was found. The question is whether this effect is equally strong for all four racial categories. With treatment coding, an interaction would represent how much the effect of (for instance) education for one category differs from the educational effect in another category that was chosen as reference. With effect coding, the interaction terms represent how much the effect of (for instance) education for a specific category differs from the unweighted main effect (which here happens to be 8,405). Because the data are unbalanced, weighted effect coding is considered here an appropriate parameterisation. In weighted effect coded interactions the point of reference is the main effect in the same model but without the interactions. In our case this educational main effect on wage is 9,048 (see Figure 2), which we calculated in the example above.
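Before turning to the worked interaction example, here is a sketch of the two steps just described, changing the omitted category and adding mean-centred education as a control (column names 'wage' and 'education' are assumed):

# Change the omitted category to 'Black' to obtain the estimate for whites
PUMS$race.wec.b <- factor(PUMS$race)
contrasts(PUMS$race.wec.b) <- contr.wec(PUMS$race.wec.b, "Black")
m.b <- lm(wage ~ race.wec.b, data = PUMS)

# Mean-centre education so that the intercept stays at the sample mean wage
PUMS$education.c <- PUMS$education - mean(PUMS$education)
m.educ <- lm(wage ~ race.wec + education.c, data = PUMS)
summary(m.educ)$coefficients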
Let's assume we already know, as will be confirmed in later examples, that the estimates for the effect of education on wages are 9,461 among whites, 5,782 among Hispanics, 12,623 among Asians, and 5,755 among Blacks. The weighted effect coded interactions then are, respectively, 9,461 - 9,048 = 413; 5,782 - 9,048 = -3,266; 12,623 - 9,048 = 3,575; and 5,755 - 9,048 = -3,293. These estimates represent how much the education effect for each group differs from the main effect of education in the sample. With weighted effect coded interactions, one can obtain these estimates simultaneously with the mean effect of education. To do so, a coding matrix has to be calculated. This coding matrix is based on the restriction that if the above-mentioned effects are multiplied by the sum of squares of education within each category, the sum of these multiplications is zero. This is the weighted effect coded restriction for interactions. The sum of squares (SS) of the continuous variable x (education) for level j of the categorical variable (race) is calculated as

$SS_{x_j} = \sum_{i=1}^{I} (x_{ij} - \bar{x}_j)^2$,

where, for the example, $x_{ij}$ denotes the education of a person i in race j, I denotes the total number of people in race j, and $\bar{x}_j$ denotes the mean of education for people in race j. To impose this restriction we replaced the weights in Table 1 by the ratio between two sums of squares to obtain a new coding matrix (see Table 2) (Lammers, 1991). The denominator of this ratio is the sum of squares of education among the omitted category. If we multiply this coding matrix with the mean centred education variable, then we get three interaction variables, and the estimates for these variables reflect the correct deviations from the main education effect together with the correct statistical tests. To keep the intercept unchanged, we finally mean centred the new interaction variables within each category of race. In previous approaches to interactions with weighted effect coding (West et al., 1996; Aguinis, 2004), it was not possible to keep the effects of the first-order model unchanged. This is because a restriction to the coding matrix was used based on the number of observations rather than on the sum of squares used here. An attractive interpretation of the interaction terms is provided: as the (multiplicative) interaction terms are orthogonal to the main effects of each category, these main effects remain unchanged upon adding the interaction terms to the model. The interaction terms represent, and test the significance of, the additional effect over and above the main effects.

Figure 2. The dashed blue and red lines represent the effects of education for Blacks and whites, respectively (Hispanics and Asians are not shown here). The dashed black line represents the effect of education that is the average of the effects among the four racial categories. This is the effect of education one would estimate if effect coding were used to estimate the interaction, and the differences in slopes between this reference and each racial category would be the interaction parameters. However, the observations of whites influence the height of the average effect of education in the sample to a larger extent than those of Blacks, due to their larger sum of squares. Therefore, the weighted effect of education, shown as the continuous line, is a more useful reference. The sums of squares are represented in Figure 2 by grey squares, and are distinct from the grey circles representing frequencies in Figure 1. The sums of squares pertain to the complete regression slope, and therefore the position of the grey squares was chosen arbitrarily at the center of the x-axis.
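The ratio of sums of squares that fills the omitted category's row of Table 2 can be computed directly from the data, for example (column names assumed as before):

SS <- tapply(PUMS$education, PUMS$race, function(x) sum((x - mean(x))^2))
-SS[c("Hispanic", "Black", "Asian")] / SS["White"]  # entries of the omitted ('White') row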
The logic of interactions between weighted effect coded dummies and a continuous variable is demonstrated in Figure 2. Finally, we briefly address the interaction between two weighted effect coded categorical variables. Unlike dummy coding and effect coding, the interaction variables are not simply the multiplication of the two weighted effect coded variables. Instead, partial weights are assigned to the interaction variables to obtain main effects that equal the effects from the model without these interactions (see Table 3 for the weights; for in-depth matrix information about how to create these partial weights please visit http://ru.nl/sociology/mt/wec/downloads/). The orthogonal interaction effects in our example denote the extra wage over and above the mean wages found in the model without these interactions, no matter whether the data are unbalanced or not. In case the data are completely balanced, the estimates from weighted effect coding are equal to those from effect coding, but they can be quite different in effect size and associated t-values when the data are unbalanced.

Examples of interactions

To demonstrate interactions that include weighted effect coded factor variables, we continue our previous example. For these interactions, the functions in the wec package deviate a little from standard R conventions. This is a direct result of how weighted effect coding works. With many forms of dummy coding, interaction variables can be created by simply multiplying the values of the two variables that make up the interaction. This is not true for weighted effect coding, as the coding matrix for the interaction is a function of the numbers of observations of the two variables that interact. So, instead of multiplying two variables in the specification of the regression model in typical R fashion, a new, third variable is created prior to specifying the regression model and then added. Here, we refer to these additional variables as the 'interaction' variable. Interaction variables for interacting weighted effect coded factor variables are produced by the wec.interact() function. The first variable entered ('x1') must be a weighted effect coded factor variable. The second ('x2') can either be a continuous variable or another weighted effect coded factor variable. By default, this function returns an object containing one column for each of the interaction variables required. However, by specifying output.contrasts = TRUE, the coding matrix (see Table 2) is returned. This coding matrix, for interacting the (weighted effect coded) race variable with the continuous education variable, again has 'Whites' (category 4) as the omitted category and shows the ratio of sums of squares as defined in Table 2. The wec.interact() function is now called without the output.contrasts = TRUE option. The first argument is the factor variable and the second is the continuous variable. The results are stored in a new variable. This new interaction variable is entered into the regression model in addition to the variables with the main effects for race and education (a sketch of these calls is given below). The results show that the returns to education, in terms of wages, are lower for Hispanics and Blacks than the average returns to education in the sample, and higher among Asians than in the sample as a whole.
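A sketch of those calls, assuming the variables created earlier ('race.wec' and the mean-centred 'education.c') and the assumed 'wage' column:

wec.interact(PUMS$race.wec, PUMS$education.c, output.contrasts = TRUE)  # coding matrix of Table 2
PUMS$race.educ <- wec.interact(PUMS$race.wec, PUMS$education.c)         # interaction variable

m.int <- lm(wage ~ race.wec + education.c + race.educ, data = PUMS)
summary(m.int)$coefficients  # intercept and main effects unchanged; interaction rows added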
Note that, without additional control variables, all effects for race and for education, as well as the estimate for the intercept, remained unchanged compared to previous examples after including the (weighted effect coded) interaction variable. Note that if one wants to estimate the effects and standard errors for the omitted category, in this case 'Whites', not only the contrasts for the categorical variable need to be changed (as demonstrated above), but the interaction variable also needs to be updated. Below, we specify the interaction between the race variable and a factor variable differentiating respondents who have a high school diploma from those who have a higher degree. Of course, both are weighted effect coded:

> PUMS$education.wec <- PUMS$education.cat
> contrasts(PUMS$education.wec) <- contr.wec(PUMS$education.cat, "High school")
> PUMS$race.educat <- wec.interact(PUMS$race.wec, PUMS$education.wec)

We created a new categorical variable 'education.wec' and assigned a coding matrix based on weighted effect coding, with 'High school' as the omitted category. The results show that respondents with a degree on average earn 14,343 dollars more than the sample average (52,320). Hispanics benefit 7,674 dollars less from having a degree compared to the average benefit of a degree, while Asians benefit 4,022 dollars more. All in all, the results are very similar to those in the previous model with the continuous variable for education. It should be noted that in the model with interactions between weighted effect coded factor variables, the intercept again shows the same value, representing the average wage in the sample. Just like in the previous examples, the omitted estimates and standard errors (for instance the income effect of Hispanics without a degree) can be obtained by changing the omitted categories in the weighted effect coded factor variables and by re-calculating the interaction variable(s).
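A sketch of the corresponding model fit, using the interaction variable created above (the wage column name is again assumed):

m.cat <- lm(wage ~ race.wec + education.wec + race.educat, data = PUMS)
summary(m.cat)$coefficients  # intercept remains the sample mean wage (52,320)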
Conclusion

This article discussed the benefits and applications of weighted effect coding. It covered weighted effect coding as such, interactions between two weighted effect coded variables, and interactions between a weighted effect coded variable and a continuous variable. The wec package to apply these techniques in R was introduced. The examples shown in this article were based on OLS regression, but weighted effect coding (also) applies to all generalised linear models. The benefits of using weighted effect coding are apparent when analysing observational data that, unlike experimental data, typically do not have an equal number of observations across groups or categories. When this is the case, the grand mean is not necessarily the appropriate point of reference. Consequently, estimates of effects and standard errors based on weighted effect coding are not sensitive to how other observations are categorised. With weighted effect coding, compared to treatment coding, no arbitrary reference category has to be selected. Instead, the sample mean serves as a point of reference. With treatment coding, selecting as a reference a category with a small number of observations and a deviant score can lead to significant results while this reference category contributes little to the overall sample mean. When weighted effect coded variables are used in interactions, the main effects remain unchanged after the introduction of the interaction terms. In previous, related approaches this was not possible (West et al., 1996; Aguinis, 2004). This allows for the straightforward interpretation that the interaction terms represent how much the effect is weaker or stronger in each category. That is, when interacting with treatment coded categorical variables, the so-called 'main' effect refers to the reference category, whereas with weighted effect coding the unconditional main (mean) effect is shown. As such, it can be used to test the assumption that estimated effects do not vary across groups. It should be noted that the R-squared of regression models does not depend on which type of dummy coding is selected. This means that the predicted values based on models using treatment coding, effect coding, or weighted effect coding will be exactly the same. Yet, as each type of dummy coding selects a different point of reference, the interpretation of the estimates differs and a different statistical test is performed. To conclude, the wec package contributes functionality to apply weighted effect coding to factor variables and interactions between (a) a factor variable and a continuous variable and (b) two factor variables. These techniques are particularly relevant with unbalanced data, as is often the case when analysing observational data.
5,577.6
2017-01-01T00:00:00.000
[ "Computer Science" ]
Laser Cooling of Molecular Anions

We propose a scheme for laser cooling of negatively charged molecules. We briefly summarise the requirements for such laser cooling and we identify a number of potential candidates. A detailed computational study with C$_2^-$, the most studied molecular anion, is carried out. Simulations of 3D laser cooling in a gas phase show that this molecule could be cooled down to below 1 mK in only a few tens of milliseconds, using standard lasers. Sisyphus cooling, where no photo-detachment process is present, as well as Doppler laser cooling of trapped C$_2^-$, are also simulated. This cooling scheme has an impact on the study of cold molecules, molecular anions, charged particle sources and antimatter physics.

Molecular anions play a central role in a wide range of fields: from the chemistry of highly correlated systems [1][2][3][4] to atmospheric science to the study of the interstellar medium [5][6][7][8][9]. However, it is currently very difficult to investigate negative ions in a controlled manner at the ultracold temperatures relevant for the processes in which they are involved. Indeed, at best, temperatures of a few kelvins have been achieved using supersonic beam expansion methods or trapped particles followed by electron cooling, buffer gas cooling or resistive cooling [10][11][12][13][14]. The ability to cool molecular anions to sub-K temperatures would finally allow investigation of their chemical and physical properties at energies appropriate to their interactions. Furthermore, anionic molecules at mK temperatures can also play an important role in (anti)atomic physics, where copious production of sub-K antihydrogen atoms currently represents the dominant challenge in the field. Sympathetic cooling of antiprotons via laser-cooled atomic negative ions that are simultaneously confined in the same ion trap has been proposed as a method to obtain sub-K antiprotons [15]. As an alternative to this yet-to-be-realized procedure, ultra-cold molecular anions could replace atomic anions in this scheme, and would thus facilitate the formation of ultra-cold antihydrogen atoms. More generally, cooling even a single anion species would be the missing tool to cool any other negatively charged particles (electrons, antiprotons, anions) via sympathetic cooling. In this letter we present a realistic scheme for laser cooling of molecular anions to mK temperatures. Laser cooling of molecules has been achieved only for a very few neutral diatomic molecules (SrF, YO, CaF) [16][17][18]. Furthermore, even if well established for neutral atoms and atomic cations, laser cooling techniques have so far never been applied to anions [19]. This is because in atomic anions, the excess electron is only weakly bound by quantum-mechanical correlation effects.
As a result, only a few atomic anions are known to exhibit electricdipole transitions between bound states: Os − , La − and Ce − [20][21][22]. For molecules the situation is quite different because their electric dipole can bind an extra electron, and even di-anions have been found to exist [23]. For instance, polar molecules with a dipole exceeding 2.5 Debye exhibit dipole-bound states. Highly dipolar molecules such as LiH − , NaF − or MgO − possess several such dipole-bound states [24,25], and valence anionic states exist as well. For simplicity, in this letter we only focus on diatomic molecules, even if the Sisyphus laser cooling techniques proposed here can also be applied to trapped polyatomic molecules [26]. In the Supplementary material, Table I [27], we present a review of most of the experimental as well as theoretical studies of diatomic anions, with useful references, if further studies are required. The first excited state (and sometimes even the ground state) of many molecular anions lie above their neutralization threshold, such as in the case of H − 2 , CO − , NO − , N − 2 , CN − or most of systems with 3, 4 or 11 outermost electrons. For this reason, these anions are not stable against auto-detachment processes and exhibit pure rovibrational transitions with ∼ 100 ms lifetimes. Even if such long-lived states can still be of interest for Sisyphus cooling, for narrow-line cooling or for Doppler laser cooling in traps [26,28], pure electronic transitions are preferred for rapid laser cooling. Transitions of ∼ 100 ns lifetime can be found between well-separated electronic states (typically B↔X states), whereas transitions in the infrared region between electronic states (typically A↔X states) have longer lifetimes of ∼ 100 µs. Some challenges in laser-cooling molecular ions have been described in [29]. For direct laser cooling of neutral diatomic molecules, a key ingredient is a good branching ratio, i.e. Franck-Condon factor (FC factor), between vibrational levels (SrF, YO, CaF all have more than a 98% branching ratio on the A 2 Π 1/2 (v = 0, J = 1/2) ← X 2 Σ(v = 0, N = 1) transition). Even with these considerations, the choice of the most suitable candidate is not obvious and is a compromise between using fast electronic transitions and choosing extremely good FC factors. Indeed, molecules with good FC factors can be found for weak dipole-valence bound transitions or for anionic molecules with 6 or 12 outermost electrons but with arXiv:1506.06505v1 [physics.atom-ph] 22 Jun 2015 forbidden dipole transitions. FC factors greater than 70% can be found in systems with 8, 14 or higher numbers of outermost electrons or in molecules which include Li or Al atoms [30,31]. Unfortunately the corresponding transitions are often in the deep infrared region. The best compromise seems to be the systems with 9 outermost electrons having X( levels. A list of such systems is given in Table I. Clearly the most studied system is C − 2 , with a perfectly known spectrum. and 96 % branching ratio, respectively. Besides, this anion has the notable advantage of not presenting any hyperfine structure. As a potential further benefit of this system, we mention that through laser photo-detachment of cold C − 2 , we could produce cold C 2 molecules, important in combustion physics and astrophysics. 
Even if more studies on laser cooling are needed, C 2 looks like a suitable candidate to be further laser cooled near 240 nm on the 0-0 Mulliken band (d 1 Σ + u ←X 1 Σ + g ) which has an extremely favorable branching ratio of 99.7 % [32]. We will therefore concentrate on C − 2 as a benchmark to study laser cooling of anionic molecules. Note however that several other molecules, such as BN − or AlN − , may be used as well. They offer very similar structure probabilities with better FC factors (higher than 98%) but with a B 2 Σ →A 2 Π decay channel that is absent in the homonuclear case of C − 2 . Contrary to C − 2 , heteronuclear molecules present a closed rotational level scheme. However, further spectroscopic studies are clearly required for such systems, as well as for other promising ones, such as metal-oxyde systems (FeO − , NiO − ) or hydride ones (CoH − ). In order to study laser cooling of C − 2 we have performed three types of simulations. The first one is "standard" laser cooling in the gas phase (thus with no strong external fields present); the second one is laser cooling in a Paul trap; and the last one is Sisyphus cooling of trapped ions in a Penning trap. The simulations are performed with the C++ code described in [28], which now also includes full N-body space charge effects [33]; the Lorentz force is solved by using Boris-Buneman integration algorithms [34,35]. Briefly, a Kinetic Monte Carlo algorithm gives the exact time of events (absorption or emission of light) when solving the rate equations to study laser excitation of the molecules under the effect of Coulomb, light scattering, dipolar, Stark and Zeeman forces. The C − 2 energy levels and required laser transitions are shown in Fig. 1(a). The lifetime of the B-state is 75 ns [37,38] and wavelengths from its first vibrational level to the X state vibrational levels are 541, 598, 667, 753, 863, 1007, 1206 nm with a corresponding transition strength probability of 72, 23, 5, 0.8, 0.1, 2 × 10 −4 , 3 × 10 −5 , 4 × 10 −6 (percentage of the FC), from [39,40]. The lifetime of the A-state is 50 µs with wavelengths of 2.53 µm (branching ratio of 96%) and 4.57 µm (branching ratio of 4%). In order to close the rotational transition cycle, two lasers are required for each of these vibrational transitions to couple X(N = 0, 2) to A(N = 1, J = 1/2), see Fig. 1(b). For our simulations, the temperature along one axis is calculated from the deviation of 50% of the central velocities' histogram. The so-called 3D-temperature (T 3D ) is the quadratic mean of the three one-dimensional temperatures (x,y,z). For our first simulation, we study a possible experiment based on deceleration of an anionic beam. In contrast to neutral cold molecules which are difficult to pro-duce at low velocity, and in spite of the development of techniques such as velocity filtering, buffer gas cooling or decelerators (see list in [28]), an anionic beam can be brought to a standstill very easily by an electric field [41]. Indeed, a typical beam of C − 2 has a current of 1 nA and is emitted at 1000 eV with an energy dispersion of 1 eV [42]. A 1000 volt potential box can thus decelerate such a beam. Furthermore, due to the energy dispersion of the beam, the stability of this voltage power supply is not critical. Typically 1/100000 of the anions (i.e. 0.01 pA current) will be decelerated inside the box to below an energy of 0.01 meV. This corresponds roughly to 0.1 K, which is within the capture range of molasses cooling. 
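As a quick check of this conversion, equating the residual kinetic energy to a thermal energy $E \approx k_B T$ gives

$$T \approx \frac{E}{k_B} = \frac{10^{-5}\,\mathrm{eV}}{8.62\times 10^{-5}\,\mathrm{eV\,K^{-1}}} \approx 0.12\ \mathrm{K},$$

consistent with the roughly 0.1 K quoted above.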
Thus, we propose a very simple deceleration scheme using a grounded vacuum chamber through which the 1000 eV beam propagates; this is followed by a chamber at 900 V with a 3 mm radius hole which focuses the beam (1 cm downstream) to a smaller hole of R ∼ 0.3 mm radius of an innermost chamber at 1000 V. The chambers could be made of transparent conducting film (such as Indium Tin Oxide) or as grids to allow laser cooling. In this final chamber the electric field decreases as ∼ 0.2(z/R) −3 along the propagation axis z [43] such that at 1 cm the Lorentz force becomes negligible compared to the laser cooling force that is typically kΓ/10 ∼ 10 −20 N, corresponding to an acceleration of 10 5 m · s −2 . We simulate such a 3D molasses cooling with several CW lasers, all with 0.5 W power and 2 cm waists and wavelengths 10 MHz red detuned from the transitions (Fig. 1). We consider here a cooling on the X(v = 0, N = 2) ↔B(v = 0, N = 0) transition. For the 3D cooling we thus have 6 lasers on the X(v = 0, N = 2) ↔ B(v = 0, N = 0) transition, 2 repumping counter-propagating lasers on the X(v = 0, N = 0) ↔B(v = 0, N = 0), plus extra repumping lasers: 4 lasers for each higher vibrational level of the ground state X up to v = 4 (these lasers are counter-propagating along +/− Z). We start with 150 particles at 80 mK distributed following a spherical gaussian distribution (σ = 4 mm). The result is indicated in Fig. 2(a). The first increase at 1 ms is due to the conversion of the Coulomb potential energy into kinetic energy. Then, a fast cooling down to 1 mK is observed, although many molecules escape from the laser waist interaction area. We find that the final temperature and losses depend on the initial density (initial gaussian radius) because of space charge effects that counter the cooling. Note that we do not attempt trapping but only molasses-cool here. Indeed, realizing a trap (MOT) would require a magnetic-field gradient producing a Lorentz force stronger than the laser trapping one, especially because the Zeeman effect of B(v = 0, N = 0) is weak (quadratic behavior). To be feasible, this cooling method requires pre-cooling down to few hundreds of mK in order to avoid having too fast molecules. In the second simulation we study particles in a Paul trap. For simplicity, we only consider motion in an harmonic pseudopotential well V (r) = 1/2 m ω r (x 2 + y 2 ) + 1/2 m ω 2 z [44]. We chose 5 K as a typical starting temperature of the beam. This would correspond to having carried out a first cooling step, using e.g. Helium as a buffer gas. We simulate Doppler cooling on the X ↔ B transitions, with a 600 MHz red detuning, and a spectral laser bandwidth of 35 MHz. As previously, we consider repumping lasers from the different vibrational levels of the ground state X up to X(v = 4), as shown in Fig. 1(a). But in this case repumping lasers are only along +Z, since particles are trapped. Results are given in Fig. 2(b) where cooling down to 60 mK is achieved within 50 ms. As the photo-detachment cross-section of C − 2 for the B-state is unknown, we use the photo-detachement cross section of C 2 , σ ∼ 10 −17 cm 2 [13] as typical value. The photo-detachement rate for the B-state is Iσ/hν = 4.3 s −1 for I = 0.16 W/cm 2 . Within 50 ms of cooling we loose only 3% of the molecules by photo-detachement. To these photo-detachement losses, we have to add 25% of decays to higher vibrational levels of the X-state. The losses' evolution over time is shown in Fig. 2(b). 
Here, we would like to emphasize that for both molasses cooling and Paul trap simulations, we use a Doppler laser cooling process, with the same lasers. The Paul trap can thus serve as a first cooling and can be turned off for further cooling using the molasses cooling for low density clouds. Photo-detachement and decays in higher vibrational Xstate levels are thus similar. For the simulations of molasses cooling however, losses due to motion of molecules out of the laser's area are much more important than photo-detachment or decay losses. A solution to avoid the losses of C − 2 through photodetachment or decays into high vibrational levels of the ground-state is to cool through the A-state. This cooling has very similar characteristics (linewidth, wavelength) to those of La − [22] but with a lighter particle and no photo-detachment in the case of C − 2 . Our third simulation thus concentrates on cooling and trapping in a ∼ 1.5 cm long Penning-like trap using Sisyphus-type cooling [28]. The principle is illustrated in Fig. 1(b): due to the axial motion induced by the electric trapping potential (300 µs oscillation time in the simulation) in a Penning-like trap, particles move between the high (2 T) and low (0.2 T) magnetic fields at both ends of their axial range. A given particle is initially in the X(v = 0, N = 0, M = 1/2) state; then, after an absorption followed by spontaneous emission, it decays towards the X(v = 0, N = 0, M = −1/2) state. More than 1 K is removed for each closed absorption-emission cycle, as the molecule continuously climbs the magnetic potential hill. In the last 3D simulation we focus on the axial temperature, set initially at 100 K for a cloud of 200 anions. For our low density plasma there is no coupling between radial and axial motions, but the radial shape of the plasma reflects the inhomogeneous magnetic field. Two cooling lasers at2.5351 µm and 2.5358 µm with 100 MHz spectral bandwidth each are detuned to be resonant at 0.2 T and 2 T, respectively, along the X(v = 0, N = 0, M = −1/2) 0.2T ↔ A(v = 0, N = 1, J = 1/2) 0.2T and X(v = 0, N = 0, M = 1/2) 2T ↔ A(v = 0, N = 1, J = 1/2) 2T transitions. The considered laser power is 0.05 W for a 1 mm waist. To avoid relying on too many lasers, we repump the losses in the X(v = 0, N = 2) states with only one single broadband (6000 MHz, 0.05 W) laser which addresses all the Zeeman-split sub-levels, resonant at 0.2 T. Results are given in Fig.3(a). In tens of ms, the axial temperature is cooled down from 60 K to few K. The lost population mainly goes to X(v = 1). As for all simulations, we load the trap using an initial (non thermalized) Gaussian velocity distribution. The evolution of this non equilibrium system leads to the high frequency velocity (and thus instantanous temperature) fluctuations in both Fig.2(b) and Fig.3. We also present an alternative simulation, of which results are given in Fig. 3(b). Here, decays in the vibrational X(v = 1) states are repumped: two lasers, at 4.574 µm and 4.593 µm, address the X(v = 1, N = 0) ↔ A(v = 0, N = 1, J = 1/2) and X(v = 1, N = 2) ↔ A(v = 0, N = 1, J = 1/2). Both are resonant at 0.2 T, with both spectral bandwidth of 6 GHz and power of 0.1 W. This simulation requires 2 more lasers but has the advantage of cooling a greater part of the molecules (less than 0.5% of anions fall in the X(v = 2) levels, within 80 ms, in comparison to the 60% of losses within 20 ms, for the first case without repumpers on X(v = 1)). 
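As a rough consistency check on the energy removed per Sisyphus cycle quoted above, and assuming an effective magnetic moment of about one Bohr magneton for the relevant Zeeman sub-levels (an assumption, not a value taken from the text), the Zeeman energy difference between the field extremes is

$$\frac{\Delta E}{k_B} \approx \frac{\mu_B\,(2\,\mathrm{T} - 0.2\,\mathrm{T})}{k_B} = \frac{(9.27\times 10^{-24}\,\mathrm{J\,T^{-1}})(1.8\,\mathrm{T})}{1.38\times 10^{-23}\,\mathrm{J\,K^{-1}}} \approx 1.2\ \mathrm{K},$$

in line with the statement that more than 1 K is removed per closed absorption-emission cycle.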
In conclusion, we have presented several possible deceleration and laser cooling schemes for anionic molecules, either in free space or trapped, by using Doppler or Sisyphus cooling, circumventing the problem of photodetachment. Working with traps can open many possibilities due to the long trapping times : it could enable restricting studies to only ro-vibrational transitions, or working with electronic transitions that have long spontaneous emission times. Furthermore, laser cooling of molecular anions followed by laser photo-detachment could be used as a source for cold neutral molecules, or as an ideal source for electron bunches, since no Coulomb force due to ions will be present and ideal uniform elliptical density shapes could be realized [45,46].
4,424
2015-05-27T00:00:00.000
[ "Physics" ]
Roles of TRPV1 and neuropeptidergic receptors in dorsal root reflex-mediated neurogenic inflammation induced by intradermal injection of capsaicin Background Acute cutaneous neurogenic inflammation initiated by activation of transient receptor potential vanilloid-1 (TRPV1) receptors following intradermal injection of capsaicin is mediated mainly by dorsal root reflexes (DRRs). Inflammatory neuropeptides are suggested to be released from primary afferent nociceptors participating in inflammation. However, no direct evidence demonstrates that the release of inflammatory substances is due to the triggering of DRRs and how activation of TRPV1 receptors initiates neurogenic inflammation via triggering DRRs. Results Here we used pharmacological manipulations to analyze the roles of TRPV1 and neuropeptidergic receptors in the DRR-mediated neurogenic inflammation induced by intradermal injection of capsaicin. The degree of cutaneous inflammation in the hindpaw that followed capsaicin injection was assessed by measurements of local blood flow (vasodilation) and paw-thickness (edema) of the foot skin in anesthetized rats. Local injection of capsaicin, calcitonin gene-related peptide (CGRP) or substance P (SP) resulted in cutaneous vasodilation and edema. Removal of DRRs by either spinal dorsal rhizotomy or intrathecal administration of the GABAA receptor antagonist, bicuculline, reduced dramatically the capsaicin-induced vasodilation and edema. In contrast, CGRP- or SP-induced inflammation was not significantly affected after DRR removal. Dose-response analysis of the antagonistic effect of the TRPV1 receptor antagonist, capsazepine administered peripherally, shows that the capsaicin-evoked inflammation was inhibited in a dose-dependent manner, and nearly completely abolished by capsazepine at doses between 30–150 μg. In contrast, pretreatment of the periphery with different doses of CGRP8–37 (a CGRP receptor antagonist) or spantide I (a neurokinin 1 receptor antagonist) only reduced the inflammation. If both CGRP and NK1 receptors were blocked by co-administration of CGRP8–37 and spantide I, a stronger reduction in the capsaicin-initiated inflammation was produced. Conclusion Our data suggest that 1) the generation of DRRs is critical for driving the release of neuropeptides antidromically from primary afferent nociceptors; 2) activation of TRPV1 receptors in primary afferent nociceptors following intradermal capsaicin injection initiates this process; 3) the released CGRP and SP participate in neurogenic inflammation. Background The inflammation initiated by release of inflammatory mediators from primary afferent nerve terminals (mainly nociceptors) is referred to as neurogenic inflammation [1,2]. A wide range of inflammatory diseases like allergic arthritis, asthma, dermatitis, rheumatoid arthritis, inflammatory bowel diseases and migraine are suggested to include a neurogenic component [3]. Many studies demonstrate that inflammatory peptides in a population of primary nociceptive neurons are critically important for induction and development of neurogenic inflammation. Experimentally, intradermal capsaicin (CAP) injection induces neurogenic inflammation and is characterized by arteriolar vasodilation, plasma extravasation, and pain (hyperalgesia and/or allodynia) [4][5][6][7][8]. 
The underlying mechanisms are that CAP sensitizes nociceptors by activating transient receptor potential vanilloid-1 (TRPV 1 ) receptors distributed in small diameter myelinated (Aδ) and unmyelinated (C) primary afferent nociceptive fibers, which leads to the release of inflammatory peptides from these sensitized afferent terminals. It is generally accepted that antidromic activation of afferent nociceptors is the cause of inflammatory peptide release and that dorsal root reflexes (DRRs) play a critical role in this process. DRRs are triggered pathophysiologically by excessive primary afferent depolarization of the central terminals in the spinal dorsal horn [9][10][11], which results from the opening of Clchannels and efflux of Clions from the synaptic terminals of primary afferents when GABA A receptors are activated by GABA released from spinal GABAergic interneurons [11,12]. DRRs are triggered in the spinal dorsal horn by GABAergic interneuronal circuits and conducted antidromically toward the periphery along the primary afferent nociceptive fibers [9,11,[13][14][15][16]. Intradermal injection of CAP to activate TRPV 1 receptors in primary afferent nociceptors can trigger and enhance DRRs [17,18], which are accompanied by flare (vasodilation) and edema (increased paw volume) in the paw [17,19], suggesting that there is a close relationship between enhanced DRRs and neurogenic inflammation presumably elicited by neuropeptide release [20]. The primary afferent fibers critically involved in triggering DRRs are CAP-sensitive fibers [18,21]. Although antidromic activation of primary nociceptive afferent endings (effector function) is well established to be a mechanism of driving the mediator release leading to neurogenic inflammation [22][23][24][25][26], there is no direct evidence to demonstrate that the release of inflammatory substances from nociceptive terminals is due to the triggering of DRRs and how activation of TRPV 1 receptors initiates neurogenic inflammation via triggering DRRs. We hypothesize that the release of inflammatory peptides in the periphery is driven by the generation of DRRs, which contributes to the spread of cutaneous inflammation and to the development of neurogenic inflammation that exacerbates pain perception. This process is initiated by activation of TRPV 1 receptors after CAP injection. To test this hypothesis, we have examined the role of the inflammatory neuropeptides, calcitonin gene-related peptide (CGRP) and substance P (SP), in DRR-mediated neurogenic inflammation by using the rat model of neurogenic inflammation induced by intradermal injection of CAP. Pharmacological and surgical manipulations were used to evaluate the role of DRRs [17,19]. The degree of acute cutaneous inflammation that followed intradermal injection of CAP was assessed by measurements of local blood flow (vasodilation) and paw-thickness (edema) of the rat foot skin. Some preliminary data have been presented in abstract form [27]. Effects of dorsal root reflex removal on capsaicin-and neuropeptide-evoked inflammation Observations on vasodilation and edema evoked by CAP and neuropeptides were made in three groups of rats for each agent. Intradermal CAP-evoked inflammation In a group of rats (n = 7), the animals underwent sham surgery without sectioning the L 2 -S 1 dorsal roots ipsilaterally. An elevated blood flow was seen at a site 15-20 mm away from the CAP injection spot (Fig. 1A) and reached its peak around 15 min after CAP injection (Fig. 1C). 
The peak increase and the value at 60 min after CAP injection were 388.7 ± 35.7% and 300.9 ± 33.3%, respectively (P = 0.0015 and P = 0.0023, compared with baseline level, one-way RM ANOVA; Fig. 1C). Change in paw-thickness on the side ipsilateral to CAP injection was presented as the difference score before and after CAP injection. In sham-operated group, the difference score of paw-thickness was 1.4 ± 0.2 (P = 0.003, compared with the group with intradermal vehicle injection, Dunnett's test; Fig. 1D). In the dorsal rhizotomized group of rats in which DRRs were removed surgically (n = 7), the enhanced blood flow induced by the same dose of CAP injected was much less than in rats with sham-dorsal rhizotomy (Fig. 1B). The blood flow increased slightly to 165.5 ± 19.9-171.1 ± 23.1% at 15-30 min after CAP injection and then recovered toward the baseline. Peak increase and the value at 60 min after CAP injection were 171.1 ± 23.1% and 159.9 ± 19.0%, respectively (P = 0.025 and P = 0.026, compared with baseline level, one-way RM ANOVA; Fig. 1C), which was much smaller than that in the sham-operated group (P = 0.0052 and P = 0.0063; Dunnett's test; Fig. 1B,C). The difference score of paw-thickness was significantly decreased to 0.82 ± 0.03 (P < 0.05, compared to the sham-operated group, Dunnett's test; Fig. 1D). Data from the group of rats in which DRRs were eliminated pharmacologically by pretreatment with bicuculline intrathecally (n = 7) show results similar to those of dorsal rhizotomized rats (Fig. 1C,D). Peak increase and the value at 60 min after CAP injection were 222.5 ± 41.6% and 164.9 ± 33.4%, respectively, which was much smaller than that in the intrathecal ACSF group (n = 6, P = 0.0082 and P = 0.032, Dunnett's test). Thus, the above data confirm that DRR removal led to an attenuation of the inflammatory reaction [17]. A control experiment has been done on the same model in our previous study by intradermal injection of vehicle (Tween 80 and saline), which did not produce obvious changes in blood flow and edema in the foot skin [17]. In addition, a previous study by our group showed that intradermal injection of CAP into the hindpaw did not significantly increase the blood flow level in the forepaw skin, suggesting that the local blood flow reaction is not the result of a change in systemic blood pressure [28]. Changes in cutaneous blood flow and paw-thickness in the hindpaw of rats following ipsilateral intradermal (i.d.) injection of CAP in the hindpaw and the effects of dorsal rhizotomy (DRZ) and intrathecal bicucuclline (BICU) Figure 1 Changes in cutaneous blood flow and paw-thickness in the hindpaw of rats following ipsilateral intradermal (i.d.) injection of CAP in the hindpaw and the effects of dorsal rhizotomy (DRZ) and intrathecal bicucuclline (BICU). A and B: Samples of the laser Doppler flowmetry traces show changes in cutaneous blood flow in the rat hindpaw following CAP injection and the effects of dorsal rhizotomy. C and D: Mean results of blood flow and paw-thickness recordings summarizing the effects of DRZ and intrathecal BICU on the CAP-evoked inflammation. Blood flow pre-CAP injection was expressed as 100% (dashed line). Change in paw-thickness following CAP injection was presented as the difference score before and after CAP injection. Bicuculline (BICU) or ACSF was given intrathecally 20 min prior to CAP injection. Inset shows the sites where CAP was injected intradermally and blood flow was measured. 
*: P < 0.05, compared to the value in the sham-dorsal rhizotomized or ACSF-pretreated group.

Intra-arterial CGRP-evoked inflammation

In a group of sham-dorsal rhizotomized rats (n = 6), local administration of CGRP by intra-arterial injection produced an increase in cutaneous blood flow in the hindpaw skin without significant change in the paw-thickness (Fig. 2). Blood flow level reached its peak at 20 min after CGRP application. The peak increase and the value of blood flow at 60 min after CGRP injection were 435.2 ± 41.4% and 245.8 ± 17.9%, respectively (P < 0.001 and P < 0.001, compared with baseline level, one-way RM ANOVA; Fig. 2C). However, removal of DRRs either surgically (dorsal rhizotomy, n = 7) or pharmacologically (intrathecal bicuculline, n = 7) produced no significant effects on the CGRP-evoked vasodilation and paw-thickness (Fig. 2B,C,D). In the dorsal rhizotomy group, P was 0.158 for the peak value when compared to the sham group, and P was 0.181 for the value at 60 min after CGRP injection when compared to the sham group. In the intrathecal bicuculline group, P was 0.457 for the peak value when compared to the intrathecal ACSF group (n = 6), and P was 0.825 for the value at 60 min after CGRP injection when compared to the intrathecal ACSF group. The difference score of paw-thickness in dorsal rhizotomized rats was 0.22 ± 0.13, P = 0.181, compared with sham-dorsal rhizotomized rats. The difference score of paw-thickness in the intrathecal bicuculline group was 0.19 ± 0.15, P = 0.198, compared with the intrathecal ACSF group.

Figure 2: Changes in cutaneous blood flow and paw-thickness in the hindpaw of rats following ipsilateral intra-arterial (i.a.) injection of CGRP in the hindpaw and the effects of DRZ and intrathecal BICU.

Intra-arterial SP-evoked inflammation

When dorsal roots were intact (sham-dorsal rhizotomy, n = 6), local administration of SP intra-arterially produced a short-lasting vasodilation and substantial edema (Fig. 3). The peak increase was 338.8 ± 38.8% at 5 min after SP application (P = 0.003, compared with the baseline value, one-way RM ANOVA). Consistent with the results of CGRP administration, neither surgical (dorsal rhizotomy, n = 6) nor pharmacological (intrathecal bicuculline, n = 6) treatments significantly affected the SP-evoked inflammation (Fig. 3B,C,D). In the dorsal rhizotomy group, P was 0.954 for the peak value compared with the sham group. In the intrathecal bicuculline group, P was 0.879 for the peak value compared with the intrathecal ACSF group (n = 6). The difference score of paw-thickness in the dorsal rhizotomy group was 2.54 ± 0.29, P = 0.196, compared with the sham group. The difference score of paw-thickness in the intrathecal bicuculline group was 1.92 ± 0.56, P = 0.73, compared with the intrathecal ACSF group. To exclude the possibility that the neuropeptide-evoked vasodilation was due to a systemic effect, change of blood flow in the forepaw was monitored simultaneously. The data show that local injection of these neuropeptides in the hindpaw did not produce significant change in blood flow in the forepaw (data not shown).

Figure 3: Changes in cutaneous blood flow and paw-thickness in the hindpaw of rats following ipsilateral i.a. injection of SP in the hindpaw and the effects of DRZ and intrathecal BICU.

Thus, the differential effects of DRR removal on CAP- and neuropeptide-evoked inflammation indicate a close relationship between DRRs and the release of these neuropeptides.
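The between-group comparisons reported here and in the following sections follow the pattern of an ANOVA with Dunnett-type contrasts against a control group. A minimal, hypothetical R sketch of such a comparison at the peak time point is shown below; the data frame, group sizes, and values are illustrative placeholders loosely modelled on the reported group means, not the authors' data or analysis code.

set.seed(1)
peak <- data.frame(
  group = factor(rep(c("sham", "DRZ", "BICU"), each = 6), levels = c("sham", "DRZ", "BICU")),
  bf    = c(rnorm(6, 390, 35), rnorm(6, 170, 25), rnorm(6, 220, 40))  # blood flow, % of baseline
)

library(multcomp)
fit <- aov(bf ~ group, data = peak)
summary(glht(fit, linfct = mcp(group = "Dunnett")))  # each treatment group vs. the sham control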
Effects of blockade of TRPV 1 , CGRP, neurokinin 1 or CGRP/neurokinin 1 receptors on the capsaicin-evoked inflammation We further examined how the blockade of TRPV 1 , CGRP, neurokinin 1 (NK 1 ), or CGRP/NK 1 receptors in the periphery affected the CAP-evoked inflammation and what differences there were by using dose-response analyses of antagonistic effects. Blockade of TRPV 1 receptors by capsazepine After baseline measurements were taken, either capsazepine at one of three doses (6, 30 and 150 µg) or vehicle was given intra-arterially 10 min prior to CAP injection. The dose-response relationship (Fig. 4) shows that capsazepine produced a dose-dependent antagonism. A low dose (6 µg, n = 6) produced a slight reduction in the flare reaction (peak value was 355.0 ± 32.6%, P = 0.21, compared to the peak value, 429.0 ± 75.8%, in vehicle group, n = 6, Dunnett's test) and in the difference score of paw-thickness (0.89 ± 0.05, n = 6; P < 0.01, compared to the vehicle group, 1.4 ± 0.03, n = 6, Dunnett's test). When the periphery was pretreated with capsazepine at 30 or 150 µg, the inhibition of CAP-evoked inflammation reached a maximum (see Fig. 4). There was no statistical difference in the flare reaction or change in paw-thickness between groups receiving 30 µg (n = 7) or 150 µg (n = 7) of capsazepine. CAP-evoked flare was nearly completely abolished when the dose of capsazepine reached either 30 or 150 µg (Fig. 4). A comparison was further made between groups receiving 30 µg and 6 µg of capsazepine. In 30 µg group, the peak blood flow reaction and difference score of paw-thickness were 134.5 ± 15.4% and 0.36 ± 0.09, respectively, that were significantly lower than those in 6 µg group (P < 0.001 and P < 0.01). Blockade of CGRP receptors by CGRP 8-37 The effect of blockade of CGRP receptors on the CAPevoked inflammation was analyzed by pretreatment of the periphery with 3 doses of CGRP . A dose-dependent inhibition of the CAP-evoked inflammation was seen with pretreatment with 0.4 (n = 6), 2 (n = 6) and 10 µg (n = 7) of CGRP , respectively (Fig. 5). A slight decrease in flare reaction and paw-thickness change was induced by CAP injection when a low dose of CGRP 8-37 (0.4 µg) was given (P < 0.01 and P < 0.01, compared to vehicle pretreatment group, n = 6, Dunnett's test). A further decrease in flare reaction and paw-thickness change was seen when The effects of blockade of TRPV 1 receptors on the CAP-evoked inflammation by pretreatment of the periphery with three dif-ferent doses of capsazepine (CPZ) Figure 4 The effects of blockade of TRPV 1 receptors on the CAP-evoked inflammation by pretreatment of the periphery with three different doses of capsazepine (CPZ). CPZ was given intra-arterially 10 min prior to CAP injection. **: P < 0.01, compared to the value in the group of i.a. injection of vehicle (Veh). ++: P < 0.01, compared to the value with the lowest dose of the same drug. Unlike the blockade of TRPV 1 receptors by capsazepine, blockade of CGRP receptors did not completely inhibit the flare and edema induced by CAP injection (Fig. 5). Since there was no statistical difference in the flare reaction and change in paw-thickness between groups receiving 2 µg or 10 µg of CGRP , the inhibition of CAP-evoked inflammation by either 2 or 10 µg should presumably be maximal. A comparison was further made between groups receiving 2 µg and 0.4 µg of CGRP . The peak blood flow reaction (225.1 ± 7.1%) in 2 µg group was significantly lower than that in the group receiving 0.4 µg of CGRP 8-37 (P = 0.002). 
The difference score of paw-thickness in 2 µg group (0.95 ± 0.13) were slightly lower than that in the group receiving 0.4 µg of CGRP (1.12 ± 0.05), but the difference did not reach statistical significance (P = 0.246). Blockade of NK 1 receptors by spantide I Similar to the results obtained from the experiments with CGRP 8-37 , a dose-dependent inhibition of CAP-evoked inflammation was seen with pretreatment with 0.4 (n = 6), 2 (n = 6) and 10 µg (n = 7) of spantide I, respectively (Fig. 6), but blockade of NK 1 receptors did not completely inhibit the flare or edema induced by CAP injection. The inhibition of CAP-evoked inflammation by either 2 or 10 µg of spantide I should be maximal because there was no statistical difference in the flare reaction or change in pawthickness between groups given 2 µg and 10 µg (Fig. 6). There was a significant difference both in peak reaction of blood flow and difference score of paw-thickness between groups receiving 2 µg and 0.4 µg of spantide I (P < 0.001 and P < 0.01). The blood flow reaction and difference score of paw-thickness were much lower in the group receiving 2 µg of spantide I. Comparison of their inhibitory effects by these doses shows that vasodilation following CAP injection could be reduced by blockade of CGRP or NK 1 receptors. The CAP-induced edema was also reduced by either blockade of CGRP or NK 1 receptors, but the NK 1 antagonist (spantide I) produced a much stronger inhibition of edema. However, when both CGRP and NK 1 receptors were blocked by co-administration of 10 µg of CGRP and spantide I, inhibition of the CAP-evoked vasodilation became stronger. The peak value (154.1 ± 15.1%) was significantly lower than the peak value in the CGRP 8-37 group (P = 0.035, Dunnett's test). Inhibition of the CAPevoked edema by co-administration of CGRP 8-37 and spantide I was slightly stronger compared to the group with spantide I pretreatment, but did not reach statistical significance. Finally, blockade of TRPV 1 receptors abolished nearly completely the CAP-induced vasodilation and edema. The peak value of vasodilation in the capsazepine pretreated group was 118.3 ± 10.2%, which was statistically significant lower than that in CGRP 8-37 pretreated (P = 0.001, Dunnett's test) and in spantide I pretreated (P = 0.005, Dunnett's test) groups, respectively, but not statistically significant lower than that in CGRP 8-37 +spantide I pretreated group (P = 0.073, Dunnett's test). Difference score of paw-thickness in the capsazepine pretreated group was 0.25 ± 0.12, which was statistically significant smaller than that in CGRP 8-37 pretreated (P < 0.01, Dunnett's test) and in spantide I pretreated (P < 0.05, Dunnett's test) groups, respectively, but not statistically significantly smaller than that in CGRP 8-37 +spantide I pretreated group (P = 0.164, Dunnett's test). Discussion Previous studies by our and other groups on an acute experimental model of neurogenic inflammation evoked by intradermal injection of CAP have physiologically and pharmacologically demonstrated that cutaneous inflammatory reactions characterized by local vasodilation (flare) and edema (increased paw-thickness) are predominantly mediated by triggering DRRs [17,19,20]. DRR activity has been recorded electrophysiologically from the central end of individual Aδ-and C-primary afferents and shown to be enhanced after CAP injection [18,21,29]. In the present study, we have further extended our ongoing project in the following respects. 
1) New evidence has The effects of blockade of neurokinin 1 receptors on the CAP-evoked inflammation by pretreatment of the periphery with three different doses of spantide I Figure 6 The effects of blockade of neurokinin 1 receptors on the CAP-evoked inflammation by pretreatment of the periphery with three different doses of spantide I. Spantide I was given intra-arterially 10 min prior to CAP injection. **: P < 0.01, compared to the value in the group of i.a. injection of vehicle (Veh). ++: P < 0.01, compared to the value with the lowest dose of the same drug. i.a. Veh i.a. Spantide I, 0.4 g i.a. Spantide I, 2 g i.a. Spantide I, 10 g been provided to confirm the view that DRRs are triggered and then enhanced by activation of TRPV 1 receptors to evoke neurogenic inflammation by driving the release of neuropeptides (CGRP and/or SP). 2) pharmacological studies using dose-response analyses of antagonism of TRPV 1 and neuropeptide receptors reveal that the released CGRP and SP participate critically in the neurogenic inflammation; 3) activation of TRPV 1 receptors in primary afferent nociceptors following CAP injection initiates this process, including triggering of DRRs. Many primary nociceptive afferent neurons and their axons (Aδ-and C-fibers) are peptidergic with the capacity to release inflammatory peptides [30][31][32][33][34][35]. CGRP and SP are major inflammatory mediators that contribute a neurogenic component to inflammation [36,37]. When released from primary afferent neurons, CGRP and SP produce neurogenic inflammation by interacting with endothelial cells, mast cells, immune cells and arterioles. For instance, CGRP is potent vasodilator that produces a strong and long-lasting vasodilation [38], and SP results preferentially in stronger plasma extravasation [39]. A critical concern addressed in the present study is the mechanism by which inflammatory mediators are released in the periphery to induce neurogenic inflammation. It has been suggested that intradermal injection of CAP results in a local vasodilation, increased plasma extravasation, and hyperalgesia through release of neuropeptides from peripheral primary afferent terminals [11,[40][41][42]. These afferent fibers can be sensitized by CAP due to activation of TRPV 1 receptors, a key nociceptive molecule expressed in these fibers, to contribute to nociceptive transmission and neurogenic inflammation [5,6,[43][44][45]. Thus, CAP plays not only a sensory role by activating nociceptors, but it also has an efferent function by initiating neurogenic inflammation. The latter results from CAP-induced Ca 2+ influx into nerve terminals through TRPV 1 receptors and voltage-dependent Ca 2+ channels, causing the exocytosis of inflammatory mediators [46][47][48][49] and their release into the periphery to produce sensitization of primary afferent nociceptors and neurogenic inflammation [50][51][52][53]. The above process can be modulated by antidromic activation of afferent fibers, which would drive and trigger the release of inflammatory mediators that initiates neurogenic inflammation, because experimentally antidromic activation of the cut dorsal roots can evoke obvious vasodilation and plasma extravasation when the electrical stimulus strength is strong enough to active C-fibers [54][55][56]. 
In the present study, experiments were designed to determine whether there was a release of CGRP and SP from sensory afferent terminals (nociceptors) and whether this release was antidromically driven by DRRs in the CAP-evoked neurogenic inflammation. We proposed that removal of DRRs would interrupt this pathway to alleviate the neurogenic inflammation induced by CAP injection. The data have shown that local vasodilation and increased paw-thickness evoked by CAP injection were greatly reduced after dorsal rhizotomy or intrathecal bicuculline administration that removed DRRs. In contrast, inflammatory reactions evoked by direct application of CGRP or SP in the periphery, which would mimic the DRR-mediated inflammation induced by CAP injection, were unchanged under the same conditions when DRRs were removed. Thus, there should be a close relationship between DRRs and the release of these neuropeptides based on the observations of differential effects of DRR removal on CAP- and neuropeptide-evoked inflammation, which suggests that the release of CGRP and/or SP is driven by DRRs to participate critically in the CAP-evoked inflammation. In this process, activation of TRPV 1 receptors appears to be an initial step. Therefore, we wanted to analyze further how neurogenic inflammation was initiated and developed via DRRs by differentiating the roles of TRPV 1 , CGRP and NK 1 receptors. Dose-response analysis of the antagonistic effect of the TRPV 1 receptor antagonist, capsazepine, on the CAP-evoked inflammation indicates that vasodilation and edema evoked by CAP injection are inhibited in a dose-dependent manner by capsazepine pretreatment. When the dose of capsazepine was in the range of 30-150 µg, the inhibition seemed to reach a maximum. This result is consistent with studies on other pain models showing that blockade of TRPV 1 receptors by similar doses of capsazepine selectively antagonized the CAP-evoked hyperalgesia and alleviated other inflammogen-evoked pain behaviors in a dose-dependent manner [57][58][59]. Importantly, CAP-evoked inflammation was nearly completely blocked with these doses. This suggests that neurogenic inflammation after CAP injection is initiated by activation of TRPV 1 receptors that in turn trigger and then enhance DRRs, which release inflammatory neuropeptides. Since the mechanism underlying neurogenic inflammation evoked by CAP injection and driven by DRRs seems to be the result of CGRP and/or SP release, we assumed that a blockade of either CGRP or NK 1 receptors in the periphery should alleviate the inflammation. The analysis of the antagonistic effects of blockade of CGRP or NK 1 receptors, by examining the dose-response relationships when CGRP 8-37 or spantide I was given as a pretreatment, shows that each antagonist when given individually reduced the CAP-evoked inflammation in a dose-dependent manner, but the inflammation was not completely abolished when the effect of each antagonist was maximal. Thus, each neuropeptide released contributes partially to the neurogenic inflammation initiated by CAP injection via activation of TRPV 1 receptors. A further analysis of blockade of both CGRP and NK 1 receptors revealed that the CAP-evoked inflammation (prominently vasodilation) was more effectively alleviated by co-administration of CGRP 8-37 and spantide I compared to the effect of a single antagonist. This suggests that CGRP and SP are two major inflammatory mediators in the neurogenic inflammation initiated by activation of TRPV 1 receptors and driven by triggering of DRRs.
In summary, the present results update the role of DRRs in neurogenic inflammation by providing new evidence to suggest that the release of CGRP and SP in the periphery is driven by the generation of DRRs, which participate critically in neurogenic inflammation and thereby exacerbate pain perception. Further, this process is initiated by activation of TRPV 1 receptors after CAP injection. Experimental animals Male Sprague-Dawley rats weighing 250-350 g were used in this study. The animals were housed in groups of two to three, with food and water available ad libitum, and were allowed to acclimate under a light/dark cycle for approximately 1 wk prior to experiments. The experiments were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and with the approval of the Institutional Animal Care and Use Committee of the University of Texas Medical Branch. All efforts were made to minimize the number of animals used and their suffering. Rats were initially anesthetized with sodium pentobarbital (i.p. 50 mg kg -1 ) to perform surgery. Anesthesia was then maintained throughout the experiment by continuous intravenous infusion of a saline solution containing sodium pentobarbital. The infusion rate was adjusted (5-8 mg kg -1 h -1 ) depending upon the depth of anesthesia. The depth of anesthesia was judged as being sufficiently deep when withdrawal responses to noxious limb stimulation and/or the eye-blink reflex to air-puffs were absent. Once an adequate anesthetic level was established, the animals were paralyzed with pancuronium (0.3-0.4 mg h -1 , i.v.). The rats were then ventilated artificially, and end-tidal CO 2 was kept within the physiological range of 3.5-4.5% by adjusting the respiratory parameters. The adequacy of the depth of anesthesia during an experiment was evaluated by examination of the pupillary reflexes and assessment of the stability of the expired CO 2 . Cutaneous blood flow and paw-thickness were measured on anesthetized and paralyzed rats because a series of previous studies on blood flow measurements by our group have been conducted under the same conditions [17,28,60,61]. Rectal temperature was monitored using a rectal probe and maintained at 37°C by a servo-controlled heating blanket. Induction of acute cutaneous inflammation An acute cutaneous inflammation model was induced by intradermal injection of CAP (from Fluka, prepared in a solution of 7% Tween 80 and 93% saline at a concentration of 1%) as previously described [17,28,61]. CAP was injected intradermally into the plantar surface of the foot in a volume of 15 µl. Control experiments were done by vehicle injections using Tween 80 and saline at the same volume as the CAP solution [17]. Measurements of cutaneous vasodilation Blood flow was detected as blood cell flux by a laser Doppler flowmeter (Moor Instruments, UK). The output showing blood flow level was recorded by a computer data acquisition system (CED 1401 plus, with Spike-2 software) in millivolt units (see panels A & B in Figs. 1, 2 and 3), as in previous studies [17,19,28,60,61]. To measure the cutaneous blood flow level and the local vasodilation (flare) that followed intradermal injection of CAP into the skin of the foot, the probe from the laser Doppler flowmeter was attached to the plantar skin surface of the foot with adhesive tape.
The flowmeter we used has been reported to produce a laser beam that penetrates to a depth of 500-700 µm below the surface where the probe is placed [62], which assured that the laser Doppler flow probe picked up the blood flow signal mainly from the microvasculature in the dermis. The flare reaction after CAP injection could be detected at distances up to 30 mm away from the CAP injection spot. A number of studies by our group [17,19,28,60,61] have consistently indicated that a large blood flow reaction seen at a distance of 15-20 mm away from the site where CAP was injected is mainly mediated by DRRs. In this study, therefore, we only measured the blood flow changes in the foot skin at a distance of 15-20 mm away from the CAP injection spot (see inset in Fig. 1). Paw-thickness measurements The degree of cutaneous inflammation due to CAP injection was also assessed by paw-thickness measurements to reflect edema due to plasma extravasation. This was done with a digital caliper placed near the site where the laser Doppler probe was placed. Care was taken to assure that the caliper was placed at the same site on the paw for each measurement. Each measurement was the mean value calculated from 3 trials [17]. Surgical and pharmacological elimination of DRRs To evaluate the involvement of DRRs in driving the release of CGRP and/or SP to contribute to neurogenic inflammation, inflammation was evoked under conditions that DRRs were eliminated surgically or pharmacologically. Dorsal rhizotomy This was done to eliminate DRRs surgically. Laminectomy was performed to expose the dorsal roots of segments L 3 -S 1 bilaterally. The exposed cord and roots were protected from drying and cooling by formation of warmed oil pool between skin flaps. The dorsal roots that needed to be sectioned were gently dissected, and a small piece of cotton containing 2% lidocaine was applied to them at the site where the roots were to be cut to minimize injury discharges. Intrathecal administration of bicuculline This was done to eliminate DRRs pharmacologically [12,17,63,64]. The suboccipital region was exposed by a midline incision; the dura over the cisterna magna was opened with a small vertical incision, and a catheter (32G, from Micor, Allison Park) was advanced through a guide cannula to the spinal subarachnoid space at the T 12 -L 1 vertebral level for intrathecal administration. Five µg of bicuculline (a GABA A receptor antagonist from Sigma-Aldrich) dissolved in artificial cerebrospinal fluid (ACSF) in a volume of 15 µl, was injected intrathecally. A previous study has demonstrated that 5 µg of bicuculline administered intrathecally can effectively block DRRs and CAP-evoked inflammation [17,60]. Peripheral administration of agonists and antagonists of inflammatory peptide receptors and antagonist of TRPV 1 receptors Close-by intra-arterial injections were used to deliver drugs to the periphery [28,29,45,61]. To do this, one branch of the femoral artery on the side of blood flow measurement was carefully isolated from connective tissue and ligated proximally. The artery was then cannulated distally by a small sized polyethylene tube that was connected with a Hamilton syringe. Drugs were given intra-arterially in a volume of 10 µl. Experimental protocol 1. 
To determine whether the release of CGRP or SP from sensory afferent terminals (nociceptors) was driven by DRRs, and what role it played in the CAP-evoked neurogenic inflammation, the spread of flare and edema in the plantar skin of the foot on the side ipsilateral to local injection of CAP (1%, 15 µl), CGRP (from Tocris, 1.0 µg) or SP (from Tocris, 0.1 µg) was measured. CAP was injected intradermally, and CGRP or SP was injected intra-arterially. Solutions of CGRP and SP were made with saline (pH corrected to 7.2-7.4). After local injection of CAP, CGRP or SP, changes in blood flow and paw thickness were recorded and monitored for 1-1.5 h, and the effects were compared to the effects of the same agents evoked under the conditions when DRRs were removed surgically (dorsal rhizotomy) or pharmacologically (intrathecal administration of bicuculline at a dose of 5 µg). Dorsal rhizotomy (L 2 -S 1 ) was performed on the side ipsilateral to the injection on the day when the experiment was conducted. 2. The effects of blockade of TRPV 1 , CGRP, NK 1 , or both CGRP and NK 1 receptors on the CAP-evoked neurogenic inflammation were analyzed pharmacologically. After control values of blood flow and paw thickness were recorded, three doses of each antagonist (capsazepine, CGRP 8-37 , or spantide I) were given intra-arterially in different groups of animals 10 min prior to CAP injection. These included the TRPV 1 receptor antagonist, capsazepine (from Tocris), at doses of 6, 30 and 150 µg [57][58][59]; the CGRP receptor antagonist, CGRP 8-37 (from Tocris), at doses of 0.4, 2.0 and 10.0 µg [65,66]; and the NK 1 receptor antagonist, spantide I (from Tocris), at doses of 0.4, 2.0 and 10.0 µg [67,68]. Capsazepine was dissolved in a vehicle made from 10% DMSO and 90% saline. CGRP 8-37 and spantide I were dissolved in saline. The changes in both blood flow and paw thickness were monitored for 1-1.5 h following CAP injection. The inhibition of the CAP-evoked inflammation induced by the highest dose of each antagonist, or by co-administration of the CGRP and NK 1 receptor antagonists, was compared among the groups of capsazepine-, CGRP 8-37 -, spantide I- and CGRP 8-37 /spantide I-pretreated animals. In separate groups, the vehicle used for making the solution of each antagonist was injected prior to CAP injection for control purposes. Data analysis All data are expressed as mean ± S.E. The baseline blood flow level (pre-CAP) was expressed as 100%, and percentage changes after CAP injection were compared for groups of animals that received different treatments. A change in paw-thickness following CAP injection is presented as the difference score before and after CAP injection and compared for the groups of animals that received different treatments. Statistical differences between groups were determined by one-way ANOVA followed by Dunnett's test. Data obtained before and at different time points after CAP injection were compared using one-way repeated measures ANOVA followed by Student's t-tests. P < 0.05 was considered statistically significant.
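As a rough illustration of the statistical workflow just described (percentage change from baseline, one-way ANOVA followed by Dunnett's test against the vehicle control), the following Python sketch shows how such comparisons might be run. The group values are hypothetical placeholders rather than the published data, scipy.stats.dunnett assumes SciPy 1.11 or later, and this is not the authors' analysis code.

```python
# Minimal sketch of the group comparison described above (not the authors' code).
# Group arrays are hypothetical difference scores of paw-thickness, for illustration only.
import numpy as np
from scipy import stats

vehicle     = np.array([1.30, 1.25, 1.41, 1.35, 1.28, 1.33])   # hypothetical vehicle control
spantide_04 = np.array([1.10, 1.18, 1.05, 1.15, 1.12, 1.09])   # hypothetical 0.4 ug group
spantide_2  = np.array([0.62, 0.70, 0.58, 0.66, 0.61, 0.64])   # hypothetical 2 ug group
spantide_10 = np.array([0.60, 0.65, 0.59, 0.63, 0.62, 0.58, 0.61])  # hypothetical 10 ug group

treated = [spantide_04, spantide_2, spantide_10]

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(vehicle, *treated)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treated group compared against the vehicle control
dunnett = stats.dunnett(*treated, control=vehicle)   # requires SciPy >= 1.11
for dose, p in zip(["0.4 ug", "2 ug", "10 ug"], dunnett.pvalue):
    flag = "significant" if p < 0.05 else "n.s."
    print(f"spantide I {dose} vs. vehicle: p = {p:.4f} ({flag})")
```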
8,160.6
2007-01-01T00:00:00.000
[ "Biology", "Medicine" ]
The Implementation of Profit Sharing at Lembaga Perkreditan Desa Avoiding distortion of LPD funds requires transparency of financial reports and revenue sharing, given that there are still frequent irregularities in the use of funds by the organizers. The purpose of this study was to determine the implications of financial performance and transparency on profit sharing. This research used quantitative methods with a population of 142 LPDs, from which 60 samples were selected. The data were collected by distributing questionnaires; the 60 LPDs in the sample were selected based on the stratified random sampling method. The data were analyzed using SEM (Structural Equation Modeling) analysis techniques with the PLS (Partial Least Square) method. Testing of the first hypothesis shows that the relationship between the financial performance variable and profit sharing has a parameter coefficient of 0.137 with a t value of 1.350. This value is smaller than the t table value (1.96). The results of testing the second hypothesis indicate that the relationship between the transparency variable and profit sharing has a parameter coefficient of 0.724 with a t value of 8.179. This value is greater than the t table value (1.96). Based on the research results, it can be concluded that financial performance has a positive but not significant relationship with profit sharing, whereas transparency has a positive and significant relationship with profit sharing. Introduction Lembaga Perkreditan Desa (LPD) is a microfinance institution established and developed in villages within regencies and cities (Wilantinni & Wirakusuma, 2019; Lestari, 2017). As a microfinance institution, the LPD provides financial services to small and micro entrepreneurs as well as low-income people who are not served by formal financial institutions. This is in line with the purpose of establishing the LPD. The purpose of establishing an LPD is to improve the economy of rural communities, and the LPD has several advantages, namely priority loans for people who want to start a business and have low incomes (Wilantinni & Wirakusuma, 2019; Setia Dewi & Suartana, 2018). Other advantages include low interest rates, loan terms of only 1-5 years adjusted to the size of the loan, certain loans that can be applied for without collateral, and no administrative fees. To maintain the existence of the LPD, the LPD must provide financial reporting services in a timely manner, submitting reports on the activities, developments and liquidity of the LPD regularly every month and reports on its health level every 3 months to the Supervisor, BPD, PLPDK, and apparatus (Prajuru) of traditional villages. However, the reality on the ground is different from what was expected: many LPDs in Bali have experienced a crisis period and some have even gone bankrupt. Many factors can contribute to this, for example large loans that are not repaid, expenditure that is too large compared to the income generated, and an accounting process that is not transparent (Andreana & Wirajaya, 2018). This statement is supported by the fact that there are still 91 LPDs in Badung that have not published their audit results, which are kept purely for the internal benefit of the LPD. If this problem is ignored, there will be considerable losses to existing LPDs, and there may even be acts of distorting LPD funds. To avoid distortion of LPD funds, transparency in financial reports and profit sharing is required.
Disclosure of the calculation of the rate of return and allocation of profits is very important to prevent financial institutions from manipulating earnings (Syanthi et al., 2017;Ningsih, 2015). On the other hand, increased transparency of bank financial conditions will also reduce information asymmetry so that market players can provide fair judgments and promote market discipline (Umiyati, 2020;Rodoni & Yaman, 2018;). Adequate disclosure is the minimum level that must be met so that the financial statements as a whole are not misleading for the purpose of directed decision making. Fair or ethical disclosure is a level that must be achieved so that all parties receive the same treatment or information services. Full disclosure requires complete presentation of all information related to decision making. Being an LPD administrator must be honest if there will be legal sanctions as regulated in the PERDA (2012) concerning Administrative Sanctions, Investigations and Criminal Provisions. Apart from transparency in financial statements, transparency is also required in the profit-sharing process. Profit sharing consists of two words, namely profit sharing and profit sharing. Sharing means cutting, dividing, breaking from the whole. Meanwhile, profit is the result of an act, whether intentional or not, both beneficial and harmful (Wibowo et al., 2018). Profit sharing is a system that includes the procedures for sharing the results of the business between the fund provider and the fund manager (Rahmawaty & Yudina, 2015). In the economic dictionary, profit sharing is defined as profit sharing. Transparency and low bank performance have a negative effect on the implementation of the profit sharing system at the bank (Anggayana & Wirajaya, 2019;Sawitri & Ramantha, 2018). Financial performance is a description of the achievement of the success of a company which can be interpreted as the results that have been achieved for various activities that have been carried out (Setia Dewi & Suartana, 2018;Hartono, 2015). It can be explained that financial performance is an analysis carried out to see the extent to which a company has carried out using financial implementation rules properly. One of the models used in measuring company performance, especially LPD is the CAMEL financial ratio model. This model is also used by conventional banks to measure performance. The CAMEL model is an official measuring tool that has been established by Bank Indonesia to calculate the health of Islamic banks in Indonesia CAMEL and is a factor that greatly determines the health predicate of a bank (Sari, 2019;Murdiati & Purwanto, 2014;Ratnaputri, 2013). The health of a bank is the ability of a bank to carry out normal banking operations and is able to fulfill all of its obligations properly in ways that are in accordance with applicable banking regulations and the assessment of bank health includes 4 criteria, namely a credit score of 81 to 100 (healthy), credit scores of 66 to 81 (fairly healthy), credit scores of 51 to 66 (unhealthy), and credit scores of 0 to 51 (unhealthy) (Sari, 2019). Based on the results of research conducted by (N. P. M. Ch. Dewi & Dewi, 2020) which stated that the effectiveness of the accounting information system and the user's technical capabilities have a positive effect on individual performance. Other research was also carried out by (Sari, 2019) stated that the results obtained are CAR, NPL, BOPO, NIM, ROA, ROE and LDR, PT. 
Bank Tabungan Negara, Tbk was declared unhealthy due to a decrease in the term management, in addition, from the side of profitability (ROA and ROE) there was also a decrease which caused banks to form allowance for impairment losses due to an increase in the ratio of non-performing loans or bad credit. Then the research conducted by (Anggayana & Wirajaya, 2019) which stated that the principles of good governance which consist of transparency, accountability, responsibility, independence and equality partially have a positive effect on financial performance, while organizational culture has no effect on financial performance. The difference between this study and previous research is 1) research conducted by (N. P. M. Ch. Dewi & Dewi, 2020), the research is the effectiveness of the accounting information system and the technical ability of the user on the individual performance of the Village Credit Institution, 2) research conducted by (Sari, 2019) This research analyzed the health of the bank using the CAMEL method, 3) research conducted by (Anggayana & Wirajaya, 2019) This study analyzes the principles of good governance and organizational culture on the financial performance of the Denpasar City Village Credit Institution. The purpose of this study was to analyze the implications of financial performance and transparency on the profit sharing that occurred in the LPD Buleleng Regency. So that the problem to be examined is financial performance and transparency have an effect on profit sharing. Researchers are interested in doing research in LPD Buleleng Regency because the Buleleng Regency area is part of the Province of Bali. Buleleng Regency consists of nine sub-districts, namely: Tejakula, Kubutambahan, Sawan, Buleleng, Sukasada, Banjar, Seririt, Busungbiu, and Gerogak Districts. The research was only conducted on LPDs that were still active in Buleleng. Methods In the early stages the researchers collected research data. The research data were collected by means of documentation and questionnaires distributed to 142 LPDs in Buleleng Regency. Furthermore, checking the feasibility of the questionnaire and testing the validity and reliability. And the data is tabulated according to the research variables. The data analysis tool used was SEM PLS. Finally, the research results are formulated and conclusions are drawn. The sampling method used is probability sampling technique with the aim of providing equal opportunities for each member of the population. The LPD that will be the target of the research sample uses an error rate of 10%. To determine the number of samples that represent the population, calculations are performed using the Slovin formula. Researchers in collecting this data using questionnaires and documentation techniques. Documentation is used to collect in the form of financial reports and credit data held by the LPD. Furthermore, the questionnaire is used to collect data in the form of answers from respondents. The questionnaire data collection technique uses a Likert scale with intervals of 1 to 5. The questionnaires will be distributed to 142 LPDs in Buleleng Regency. After the questionnaire was collected, the researcher tested its validity and reliability. The reliability test used Cronbach's Alpha, while the validity test was carried out by testing the correlation between the item score and the total score or comparing the correlation value (r count) with r table. 
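For reference, the Slovin sample-size calculation mentioned above can be sketched as follows; the rounding convention is an assumption, and the study's final sample of 60 LPDs is slightly larger than the computed minimum.

```python
# Minimal sketch of the Slovin formula mentioned above (not the authors' code).
# n = N / (1 + N * e^2), where N is the population size and e the error tolerance.
import math

def slovin(N: int, e: float) -> int:
    """Return the minimum sample size, rounded up to the next whole unit."""
    return math.ceil(N / (1 + N * e ** 2))

N, e = 142, 0.10            # 142 LPDs in Buleleng Regency, 10% error rate
print(slovin(N, e))          # -> 59; the study ultimately surveyed 60 LPDs
```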
The SEM (Structural Equation Modeling) analysis technique with the PLS (Partial Least Square) method was used to analyze the data in this study (Ghozali, 2015). The data analysis proceeded in two stages. The first stage was to evaluate the measurement model (outer model). Validity and reliability were tested by assessing convergent validity, discriminant validity, composite reliability, Cronbach's Alpha and AVE. Formative indicators were tested with weight significance and multicollinearity. The second stage was to evaluate the structural model (inner model). Evaluation of this model was based on the coefficient of determination (R2), predictive relevance (Q2), and goodness of fit index (GoF), followed by hypothesis testing based on the t-statistic value and the probability value. For hypothesis testing at an alpha of 5%, the critical t-statistic value used is 1.96. Results and Discussion In this research, there were three constructs: two exogenous variables, namely financial performance (indicators: capital, assets, management, earnings and liquidity) and transparency (indicators: document readiness and accessibility, clarity and completeness of information, openness of processes, and frameworks of regulations that guarantee transparency), and one endogenous variable, Profit Sharing (indicators: level of investment, amount of available funds, comparison of profit sharing, determination of income and expense items, and accounting policies). Assess the Outer Model or Measurement Model A construct is said to have high reliability if the Composite Reliability value is above 0.70, the Cronbach's Alpha value is above 0.60, rho_A is above 0.70, and AVE is above 0.50 (Ghozali, 2015). Table 1 shows that all the constructs in this study produce a Composite Reliability value above 0.70 and a Cronbach's Alpha value above 0.60. The lowest score for Composite Reliability is in the Transparency construct with a value of 0.853, as is the lowest Cronbach's Alpha, with a value of 0.785. It can be concluded that the constructs in this study are reliable. Furthermore, the AVE value is above 0.5 for all constructs contained in the research model. The lowest AVE value is 0.594 in the transparency construct, so it can be concluded that the constructs in this study are valid. The rho_A value is above 0.70 for all constructs. The magnitude of the correlation between constructs and latent variables can also be seen in Figure 1 (path diagram of the bootstrapping scores; source: data processed). Structural Model Testing (Inner Model) Assessment of the model with PLS starts by looking at the R-square for each dependent latent variable (Ghozali, 2015). Table 2 presents the R-square estimation results obtained using SmartPLS. Based on the coefficient of determination, the R-square value of Profit Sharing is 0.697, meaning that 69.7% of its variance can be explained by the Financial Performance and Transparency constructs. Hypothesis testing The basis used in testing the hypotheses is the value contained in the output for the inner weights. Table 3 provides the estimated output for testing the structural model. As shown in Table 3, the results of testing the first hypothesis indicate that the relationship between the financial performance variable and profit sharing has a parameter coefficient of 0.137 with a t value of 1.350. This value is smaller than the t table value (1.96).
The results of testing the second hypothesis indicate that the relationship between the transparency variable and profit sharing shows the parameter coefficient value of 0.724 with a t value of 8.179. This value is greater than the t table (1,960). Financial performance affects the Profit Sharing Financial performance is an analysis carried out to see the extent to which a company has implemented proper and correct financial implementation rules (Anggayana & Wirajaya, 2019;As'ari, 2017;Rahmawaty & Yudina, 2015). Meanwhile, profit sharing is a profit sharing system in which the owner of the capital works together with the executor of the capital to carry out business activities (Sudarsono & Saputri, 2018;Santoso, 2016;Trishananto, 2016;Wardiah & Ibrahim, 2013). Performance shows something related to the strengths and weaknesses of a company. These strengths are understood so that they can be used and weaknesses must be identified so that corrective steps can be taken. Bank performance can be measured by analyzing and evaluating financial statements. Performance is an important thing that must be achieved by every company anywhere, because performance is a reflection of the company's ability to manage and allocate its sources of funds. The results of testing the first hypothesis indicate that the relationship between financial performance variables and profit sharing shows the value of the parameter coefficient of 0.137 with a t value of 1.350. This value is smaller than the t table (1,960). This is because the profit sharing obtained is determined based on the success of the fund manager to generate income. The ratio that describes the ability of a bank to manage the funds invested in all assets that generate income is ROA. If the ROA increases, the bank's income will also increase, with the increase in bank income, the profit-sharing rate received by customers will also increase. Thus it can be said that the higher the ROA, the higher the profit sharing received by the customer (Apriyantari & Ramantha, 2018;Ismanto, 2018;Rahmawaty & Yudina, 2015). Previous research examined the existence of an indirect relationship between performance and profit allocation. In other words, the researcher argues that corporate governance mechanisms affect company performance which affects profit distribution. The results of testing the first hypothesis indicate that the relationship between financial performance variables and profit sharing shows the parameter coefficient value of 0.137 with a t value of 1.350. This value is smaller than t table (1,960). These results indicate that financial performance has no effect on profit sharing. This is because the Village Credit Institutions provide more benefits in accordance with the provisions already owned by the LPD. This is because many LPDs in Bali are experiencing a period of crisis and some even go bankrupt. The factors that influence this are the presence of a sizeable loan from the borrower and not returned, as well as expenses compared to the income generated and the bookkeeping process that is not transparent. This is supported by a statement (Riyadi & Yulianto, 2014) the cause of the negative relationship between profit sharing financing on ROA is that the first customer who has received profit sharing financing from the bank does not necessarily return the funds obtained from the bank in the same year, then the second is because not necessarily all customers are obedient in returning the funds obtained from bank. 
In addition, there is research conducted by (Arfiani & Mulazid, 2017) which stated that the Inflation variable has no influence on the Profit Sharing Rate variable. The results of this study are in line with research conducted by (Apriyantari & Ramantha, 2018) which stated that earning assets have an effect on financial performance, capital adequacy has a positive effect on financial performance, and LDR has a positive effect on financial performance and this study also found that NPLs were able to moderate the effect of earning assets, and LDR on financial performance. Then the research conducted by (Ismanto, 2018) which stated that customer relationships and product quality produced by UKMs have a positive effect on customer satisfaction, while consumer satisfaction has a positive effect on the financial performance of UKMs in Jepara Regency and UKM's owners believe that having good customer relationships causes consumers to be satisfied with UKMs and will maintain loyalty to buy returns the resulting products so as to improve the financial performance of UKMs. Effect of Transparency on Profit Sharing Transparency is important to maintain people's trust in LPDs. Transparency, namely information is given without being covered. The LPD is a financial institution that deals with the community (krama Desa Pakraman), so it requires disclosure of financial information that can be accessed by the public in the form of community supervision of the LPD being considered. The application of the principle of transparency to the LPD can provide transparency of information about the condition of the LPD, while the application of the principle of accountability to the LPD can increase the trust of village manners to the LPD manager (Anggayana & Wirajaya, 2019). The hypothesis shows that the relationship between transparency and profit-sharing variables shows the parameter coefficient value of 0.724 with a value of 8.179. This value is greater than t table (1,960). This means that transparency has a positive and significant relationship to Revenue Sharing. This is because with transparency, all decision-making processes and provision of material information run well and always prioritize the principle of openness. Prioritizing the principle of openness or transparency is very well established to make customers have a sense of trust in financial institutions. Transparency in a business is very important because with transparency, the public or krama will have confidence in a business activity or financial institution to entrust assets that are owned, whether in the form of goods or money (Nopiani et al., 2020;Iswahyudi et al., 2016). By developing an accurate accounting information system, it can increase transparency in financial management so that the information generated can help manage data more quickly, effectively and efficiently (K. C. Dewi et al., 2018). LPD must carry out transparency, it is hoped that it can increase the trust of the village community (Krama Desa Pakraman) in the LPD. This is because disclosing the calculation of the rate of return and allocation of profits is very important to prevent financial institutions from manipulating profits (Syanthi et al., 2017;Ningsih, 2015). On the other hand, increased transparency of bank financial conditions will also reduce information asymmetry so that market players can provide fair judgments and promote market discipline (Umiyati, 2020;Rodoni & Yaman, 2018;). 
The results of this research were in line with research conducted by (Anggayana & Wirajaya, 2019) which stated that transparency has a positive effect on financial performance, accountability has a positive effect on financial performance, responsibility has a positive effect on financial performance, independence has a positive effect on financial performance, equality has a positive effect on financial performance, organizational culture has no effect on the financial performance of Village Credit Institutions in Denpasar City. Then the research conducted by (Sawitri & Ramantha, 2018) which stated that transparency has a positive and significant impact on the performance of the Denpasar City Rural Bank. Conclusion Based on SEM (Structural Equation Modeling) analysis with the PLS (Partial Least Square) method, it is known that financial performance has no effect on the profit sharing of LPD in Buleleng Regency. LPD has its own Profit-Sharing method, where the profits generated will be used for the benefit of the village community (Krama Desa Pakraman). Second, transparency has a positive and significant relationship to the Profit Sharing at LPDs in Buleleng Regency. The Village Credit Institution must carry out transparency, it is hoped that it can increase the trust of the village community (Krama Desa Pakraman) in the LPD.
4,827.2
2020-01-01T00:00:00.000
[ "Business", "Economics" ]
Improving traffic accident severity prediction using MobileNet transfer learning model and SHAP XAI technique Traffic accidents remain a leading cause of fatalities, injuries, and significant disruptions on highways. Comprehending the contributing factors to these occurrences is paramount in enhancing safety on road networks. Recent studies have demonstrated the utility of predictive modeling in gaining insights into the factors that precipitate accidents. However, there has been a dearth of focus on explaining the inner workings of complex machine learning and deep learning models and the manner in which various features influence accident prediction models. As a result, there is a risk that these models may be seen as black boxes, and their findings may not be fully trusted by stakeholders. The main objective of this study is to create predictive models using various transfer learning techniques and to provide insights into the most impactful factors using Shapley values. To predict the severity of injuries in accidents, Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Residual Networks (ResNet), EfficientNetB4, InceptionV3, Extreme Inception (Xception), and MobileNet are employed. Among the models, the MobileNet showed the highest results with 98.17% accuracy. Additionally, by understanding how different features affect accident prediction models, researchers can gain a deeper understanding of the factors that contribute to accidents and develop more effective interventions to prevent them. Introduction The number of automobiles on the road is increasing drastically due to the rapid growth of today's society.Traffic accidents have also grown, resulting in massive human and economic costs [1].Every year, a substantial number of people are injured or killed in car accidents across the world, resulting in huge human and financial losses.To successfully reduce deaths and damages caused by road traffic accidents (RTAs), it is critical to understand the causes of such incidents as well as the severity of the injuries.The increasing complexity of road infrastructure, along with the growing number of cars on the road, necessitates a data-driven approach to studying accident trends and identifying possible risk factors.It is critical to continue investigating and understanding the causes of RTAs, as well as applying effective ways to limit their occurrence and severity [2,3]. According to a recent World Health Organization (WHO) study on global road safety, traffic accidents are responsible for more than 1.19 million deaths each year and automobile accidents are the leading cause of mortality among young people and teens [4,5].The severity of traffic accidents is a significant indicator of traffic accident injury.There are a variety of elements that contribute to traffic accidents of varying severity [6,7].In the last 20 years, no substantial reduction in traffic accident fatalities and injuries has been observed.Predictive models can help researchers proactively address accident factors, potentially reducing fatalities, saving costs, and enhancing understanding.The authors discussed weather conditions on different types of roads [8,9].Other important factors include lighting conditions, first road class and number, and number of vehicles [10]. 
The central goal of accident data analysis is to identify the key factors that influence the occurrence of road traffic accidents, ultimately addressing critical road safety issues.The effectiveness of accident prevention strategies predominantly relies on the authenticity of the gathered and estimated data and the suitability of the chosen analysis methods [11,12].Choosing the appropriate data analysis method is crucial for revealing the causes of accidents in specific zones or study locations and for reasonably accurately predicting the likelihood of daily accident occurrences or assessing the safety levels for different groups of road users in that area [13].Consequently, the quality of the research relies on the selection of suitable methods.Machine learning approaches have been employed by authors to predict traffic accidents [14,15].Zhang et al. [16,17] used the generalized random forest to estimate heterogeneous treatment effects in road safety studies, providing local authorities and policymakers with more complete information and improving the efficacy of speed camera programs.Some researchers applied statistical methods [18,19], reinforcement learning approaches [20,21], hybrid models [22,23] and deep learning models [24].A deep convolutional neural network and random forest are employed for the accident risk prediction method in [25,26]. Many researchers have attempted to investigate accident-contributing elements; however, little work has been given to explaining black box models [27,28].The authors applied five machine learning models and explainable machine learning [29,30].The primary goal of this research is to develop an accident injury severity prediction model based on a transfer learning approach and to identify major contributing elements utilizing an explainable approach.The US accident dataset (2016-2021) is utilized for predicting traffic accident severity.This study aims to develop an automated system for categorizing accident severity.In brief, this study makes the following noteworthy contributions: • A MobileNet model based on transfer learning is employed, showcasing exceptional accuracy in the prediction of road traffic accident severity. • The significance of various features is demonstrated through the utilization of the SHapley Additive exPlanations (SHAP) model. • The proposed model is also tested on another dataset to prove the generalizability of the model. The structure of this study is as follows: Section 'Related Work' offers an overview of prior research in this field.Section 'Dataset & Methodology' introduces the proposed approach and describes the deep learning and transfer learning models.Section 'Results and Discussion' presents the assessment of the proposed approach, including experimental results and related discussions.Lastly, the Section 'Conclusion' serves as the conclusion for this study. Related Work Machine learning has gained popularity in forecasting accident severity in recent years due to its capacity to uncover hidden connections and produce more accurate findings than traditional statistical approaches.Traditional statistical approaches for predicting accident severity include disadvantages such as low accuracy and unrealistic assumptions.Machine learning and deep learning approaches have been used by researchers to improve the effectiveness of the prediction tool.This section offers an overview of some of the prior methodologies used to forecast the severity of traffic accidents. 
In terms of traffic accident characteristics, Gan et al. [31] used a random forest method to identify eight traffic accident data attributes to predict the degree of traffic accident validation.Engine capacity, hour of day, vehicle age, month of year, day of week, age range of drivers, vehicle movement, and speed restriction are all factors to consider.The Light-GBM model was 87% accurate.In the second research [32], the authors assessed the efficiency of several machine learning models such as Naive Bayes (NB), Random Forest (RF), adaptive boosting (ADA), and Logistic Regression (LR) in predicting injury severity for road accidents.The RF model had the greatest accuracy rate of 75.5%.Bharti et al. [33,34] applied deep learning models to predict traffic flow.Yadav and Redhu [35] presented an enhanced car following model by analyzing traffic density and jam. In Saudi Arabia, Aldhari et al. [36] proposed a machine learning-based approach for predicting the severity of road accidents.The system used three machine learning models, RF, LR, and XG-Boost, and used SHAP to solve bias concerns.Experiments are carried out in two modes: binary class classification and multi-class classification.In the first case, XG-Boost had the greatest accuracy score of 71%, while in the second situation, XG-Boost had the best accuracy score of 94%.Sameen and Pradhan [37] suggested a method for forecasting the severity of two accidents using deep learning models such as multi-layer perceptron (MLP), Bayesian linear regression (BLR), and recurrent neural network (RNN).According to research 5, the RNN model obtained an accuracy of 71.77%. A basic CART model was suggested in the study [38,39] to predict the severity of motorcycle accidents.In addition, the Partial Decision Trees (PART) and MLP models were used in the research.The relevant elements associated with the severity of motorcycle collision injuries were also discovered.According to the data, the CART model had an accuracy score of 73.81%, while the PART model had a score of 73.45%.Lin et al [40] suggested a deep learningbased system for traffic accident prediction for the Internet of Vehicles.The authors employed learning models such as DNN, DT C4.5, NB, deep belief network (DBN), MLP, and Bayesian network to predict accident risk.The study's findings revealed that the DNN outperformed other models and performed well for stage one and stage two clustering. Jamal et al. [41] introduced a network that uses a variety of machine learning models, including RF, LR, DE, and XGBoost, to increase prediction accuracy of road accident severity.The authors discovered that the XGBoost model outperformed other models in terms of individual class accuracy and overall prediction performance.Furthermore, the authors discovered particular elements that have a substantial influence on the severity of traffic accidents using feature importance analysis.The suggested XGBoost model scored an outstanding 93% accuracy.To discover the relevant elements for road accident severity, the author [42,43] proposed RFCNN, an ensemble learning model that integrates machine learning and deep learning.According to their study, the proposed RFCNN model has achieved a good accuracy score on the 20 most relevant characteristics.Bahiru et al. [44] examined the performance of numerous machine learning methods, including ID3, NB, J48, and CART.The accuracy of the J48 machine learning model was found to be 96% in the research. Cicek et al. 
[29] applied several machine learning models with explanations to predict the severity of accidents.Authors applied deep learning for multitasking and predicted severity levels of traffic accidents with explanations [27].They performed experiments on a Chinese dataset for traffic accidents.Existing literature indicates that many researchers have employed machine learning and deep learning to predict the severity of traffic accidents.However, a limited number of studies have conducted comparative analyses of the performance of various deep-learning methods.Furthermore, very little research has investigated the exploration of contributing factors using explanations.Explanation of models enhances transparency, interpretability, explanatory capacity, domain knowledge integration, and scientific coherence of models [45].This is particularly significant because the majority of prediction methods are commonly regarded as black boxes.Therefore, this study compares five distinct transfer learning methods to investigate their respective predictive capabilities.In a novel contribution, an explainable technique with the proposed model is applied to forecast the most influential factors contributing to accidents in the proposed models.A summary of prior studies is presented in Table 1. Dataset & methodology The dataset used, the deep learning models and transfer learning models used, as well as the parameters for evaluating the performance of these models for the prediction of the severity of traffic accidents, are all covered in detail in this section of the study.The framework adopted in the experiment is presented in Fig 1. Dataset This research makes use of accident data records spanning five years (2016-2020) from New Zealand, which were obtained from the Crash Analysis System (CAS) maintained by the Te Manatu Waka Ministry of Transport.The dataset is also accessible through the open data portal.Two data sets were acquired from the CAS system, encompassing information about individuals involved, vehicles, and accident details.These two datasets, known as the 'person' dataset and the 'accident' dataset, were merged to create a comprehensive dataset focusing on factors contributing to accidents.Initially, the combined dataset contained 378,820 rows and 101 columns.However, several columns, out of the 101, were excluded from the study due to their lack of relevance to accident-causing factors.For example, a column containing information about nearby police stations was deemed unnecessary for this research.Consequently, 36 features related to various aspects of accidents are selected.These encompass crash type, crash 2. Multilayer Perceptron (MLP) The multilayer perceptron model [46] is a significant improvement over the original perceptron model by Rosenblatt.While the perceptron was limited to handling linearly separable problems in basic logic, the multilayer perceptron introduces multiple layers of functional neurons, making it capable of addressing nonlinear separable problems.The architecture consists of fully interconnected layers, allowing for the organized flow of information.It uses the error back-propagation algorithm to train, minimizing the cumulative error on the training set, typically measured using mean-square error (MSE) for each sample. 
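A minimal Keras sketch of an MLP classifier of the kind described above is given below; the layer sizes, the assumption of 36 input features and four severity classes, and the placeholder training data are illustrative only and do not reproduce the authors' exact architecture.

```python
# Minimal MLP sketch for accident-severity classification (illustrative, not the authors' model).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_features, n_classes = 36, 4   # assumed: 36 selected features, 4 severity levels

model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for the preprocessed accident features
X_train = np.random.rand(1024, n_features).astype("float32")
y_train = np.random.randint(0, n_classes, size=1024)
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```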
Convolutional Neural Network (CNN) CNN [47] is a deep neural network designed for image recognition, classification, and segmentation.It employs convolution, non-linear activation, and pooling layers to extract features.Stacked CNNs are used for specific tasks, such as detecting parasites in infected cell images.CNN's architecture is multi-layered, with each layer applying filters or kernels to input data to create feature maps.The output of convolutional layers is concatenated and fed into fully connected layers for further analysis.CNN has become a standard in medical domain classification and employs secure interaction protocols for privacy-preserving feature extraction. Long Short-Term Memory (LSTM) Long Short-Term Memory (LSTM) [48] is a specialized recurrent neural network (RNN) architecture developed to address the limitations of traditional RNNs in handling long-term dependencies within sequential data.LSTMs are particularly well-suited for a variety of tasks involving sequences, including natural language processing, time series analysis, speech recognition, and more.Key features of LSTMs include their capacity to mitigate the vanishing gradient problem in standard RNNs, memory cells that allow for information storage and erasure, and gating mechanisms (input, forget, and output gates) to regulate data flow.These networks employ activation functions to analyze incoming data and train using "Backpropagation through Time" (BPTT).LSTMs find applications in a wide range of domains, from language modelling and stock price prediction to speech recognition and image captioning.Variants like Bidirectional LSTMs and simpler Gated Recurrent Unit (GRU) networks have also been introduced.LSTMs have proven highly effective in modelling sequential data, making them a EfficientNetB4 EfficientNetB4 [50] is a member of the EfficientNet family of neural networks, known for their exceptional performance in image classification tasks while remaining computationally efficient.It strikes a balance between model size, computational requirements, and accuracy.Effi-cientNetB4 employs a systematic approach to scale neural network architectures, achieving an ideal balance of depth, width, and resolution through compound scaling.It uses depth-wise separable convolutions and squeeze-and-excite blocks to enhance efficiency and feature capture.This model demonstrates top-tier accuracy on benchmarks like ImageNet while being computationally efficient.EfficientNetB4 is widely used for transfer learning, where pretrained models on large datasets are fine-tuned for specific image classification tasks with minimal data.Its efficiency and performance have made EfficientNet models, including B4, popular choices in various computer vision applications.It's crucial to select the appropriate model size, such as B4, based on the specific task's computational and accuracy requirements.Effi-cientNetB4 showcases an innovative approach to creating efficient yet high-performing convolutional neural networks, making it a valuable option for image classification and transfer learning. 
InceptionV3 InceptionV3 [51] is a CNN model widely used for image recognition tasks.It achieves high accuracy and features numerous convolutional, pooling, and activation layers.The architecture incorporates inception modules, enabling the network to learn distinct feature maps at different scales.Batch normalization and factorized 1x1 convolutions are used to reduce parameters and improve training efficiency.While versatile for various tasks and datasets, it can be computationally intensive and memory-consuming. Xception Xception [52], short for "extreme inception," is a deep CNN architecture proposed by Franc ¸ois Chollet in 2017.It extends Inception's ideas by using depthwise separable convolutions, which are more efficient.Depthwise separable convolutions consist of depthwise and pointwise convolutions, reducing computational complexity.Xception is known for its deep architecture, which enables it to learn complex features, and it excels in image classification accuracy. MobileNet MobileNet [53] is designed for embedded devices with limited processing capabilities.It balances accuracy and model size efficiently.The key innovation is the use of depthwise separable convolutions, which divide convolutions into depthwise and pointwise stages, significantly reducing computational costs and model size.This division drastically reduces computational demands and model size while maintaining reasonable accuracy levels.MobileNet's efficiency is based on the separation of spatial convolutions (depthwise convolutions) from feature mixing (pointwise convolutions).This modular design allows MobileNet to efficiently learn and process information across layers while drastically reducing computing burden when compared to standard convolutional networks.Notably, MobileNet has progressed through several versions, including MobileNetV1, V2, and V3, with each iteration bringing improvements in speed and efficiency.These versions have optimised the design, utilising advances in deep learning techniques to expand its capabilities.MobileNet is widely used in mobile and embedded systems, playing an important role in tasks such as object identification, picture classification, semantic segmentation, and other computer vision-related applications.Its flexibility to resource-constrained devices, as well as its ability to retain competitive accuracy, makes it an ideal candidate for scenarios that need efficient yet strong neural network designs.MobileNet's use of depthwise separable convolutions, together with its growth through many iterations, highlights its importance in providing efficient and accurate neural network processing for mobile and embedded systems. Evaluation parameters This study utilizes multiple evaluation criteria, such as accuracy, F1 score, recall, and precision, to gauge the effectiveness of transfer learning models.Furthermore, the research makes use of confusion matrices to assess the performance of these algorithms.A confusion matrix, also known as an error matrix, is a tabular representation commonly used to illustrate the classifier's performance on test data, offering a visual representation of algorithm performance. 
A "True positive (TP)" refers to instances in which the model made an accurate prediction for the positive class, while "True negative (TN)" signifies cases where the model correctly predicted the negative class. Conversely, "False positive (FP)" corresponds to situations where the model made an incorrect prediction for the positive class when the actual class was negative. Likewise, "False negative (FN)" denotes instances where the model inaccurately predicted the negative class when the true class was positive. The model's overall prediction accuracy is determined by evaluating the ratio of correct predictions to the entire dataset's total instances. This accuracy metric can be computed through the following formula: Accuracy = (TP + TN) / (TP + TN + FP + FN). Precision serves as a metric that gauges the proportion of positive instances that were accurately predicted out of all the instances that the model identified as positive. Its central goal is to reduce false positives, providing insight into the model's capacity for correctly identifying positive cases. Precision is determined through the following formula: Precision = TP / (TP + FP). Recall, which is also referred to as the true positive rate or sensitivity, evaluates the proportion of positive instances that were correctly predicted relative to the total number of actual positive instances within the dataset. It quantifies the model's effectiveness in accurately capturing positive cases. Recall is computed using the following formula: Recall = TP / (TP + FN). The F1 score represents the harmonic average of precision and recall, offering a well-balanced assessment of the model's comprehensive performance by simultaneously accounting for both precision and recall. Its computation involves the following formula: F1 = 2 × (Precision × Recall) / (Precision + Recall). Results and discussion The open-source TensorFlow and Keras libraries were used in this work to create the pretrained models. The Python programming language was used in conjunction with the Anaconda platform to analyse traffic accident severity using transfer learning algorithms. A Dell PowerEdge T430 server with a GPU was used to handle the dataset's computing demands. This server has eight cores, sixteen logical processors, and 32 GB of RAM. The paper proposes using transfer learning techniques to handle the challenge of predicting traffic accidents. Various scientific approaches are used to assess the efficacy and importance of the suggested methodology. These findings are invaluable for researchers and practitioners seeking to leverage transfer learning for traffic accident severity detection, with MobileNet emerging as a particularly promising candidate for further exploration and deployment in real-world applications. The class-wise accuracy of all models is shown in Table 5. SHAP explanation SHAP (SHapley Additive exPlanations) [54] is a popular technique used for explaining the predictions of machine learning models, including black-box models like deep neural networks used in transfer learning. The primary version of SHAP that is commonly used for explaining black-box models is called "Kernel SHAP." Due to its powerful processing and visualisation capabilities, researchers have been using it more frequently to study road safety [55,56]. In this study, to uncover the significance of features in the black-box transfer learning model, Shapley values for each feature are calculated using the Python Shap library. SHAP highlights the significance of individual characteristics in forecasting accident severity. While SHAP feature relevance outweighs that of traditional approaches, it only provides limited extra insights when used alone.
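A minimal sketch of how Kernel SHAP might be applied with the Python shap library to a trained classifier is shown below; the model, X_train, and feature_names objects are assumed to come from a preceding training step, and the snippet is illustrative rather than the authors' code.

```python
# Minimal Kernel SHAP sketch for a trained classifier (illustrative, not the authors' code).
# Assumes `model`, `X_train` (2D numpy array) and `feature_names` already exist.
import shap

background = shap.sample(X_train, 100)               # small background set for KernelExplainer
explainer = shap.KernelExplainer(model.predict, background)

X_sample = X_train[:200]                              # subset to keep computation tractable
shap_values = explainer.shap_values(X_sample, nsamples=100)

# Summary plot: features ranked top-to-bottom by importance; for multi-class output
# shap_values is per-class, yielding an aggregated importance plot.
shap.summary_plot(shap_values, X_sample, feature_names=feature_names)
```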
The SHAP summary plot arranges features along the y-axis in descending order of importance, with the most influential features towards the top; each point is coloured by the corresponding feature value, from high (red) to low (blue). The x-axis represents the impact of these features on the model output, and each point corresponds to a data point from the training dataset. A point to the left of 0 on the x-axis indicates an observation that shifts the prediction in a negative direction, while a point to the right of 0 shifts it in a positive direction. As depicted in Fig 2, the road category stands out as the most influential factor in the model's performance. In particular, higher road-category values such as 'Vehicle track' and 'Motorway' fall on the left side of the plot, lowering the predicted severity, which aligns with the fact that most accidents occur in rural and urban areas. Fig 2 also shows that drug consumption leads to more severe accidents: the Shapley value associated with the drug-related feature increases in tandem with drug consumption, so as drug consumption levels rise, so do the predicted probability of an accident and the predicted injury severity.

Validation and generalization of the proposed approach
To further demonstrate the effectiveness of the proposed approach, this study also conducts experiments on a distinct dataset, "US Accidents (2016-2021)" [57], a comprehensive repository of almost 2.8 million records of traffic incidents that occurred across 46 states in the United States between February 2016 and December 2021. This dataset provides extensive geographic coverage, spanning locations across the country and several years, making it useful for analysing regional and seasonal differences in accident patterns. With 47 variables, it contains information on the factors that contributed to accidents, including accident locations, timings, weather conditions, road conditions, and accident severity. The results obtained on this dataset are reported in Table 6 and confirm the effectiveness of the proposed approach.

Time complexity in learning models is primarily concerned with the training phase, where it measures the time required to update the model's parameters based on the input data. This complexity is determined by several factors, including the model's architectural complexity, the size of the training dataset, and the optimization approach used. Table 7 compares the training and testing times of the models utilized. Notably, the MobileNet model's computational time is remarkably efficient, at just 200 seconds, significantly lower than the training times of the other transfer learning models used in this work. This efficiency does not compromise accuracy, as MobileNet consistently outperforms the other individual models in predictive accuracy.

To further assess the effectiveness of the proposed method, K-fold cross-validation is incorporated as an additional performance-evaluation step. The results of the 5-fold cross-validation are presented in Table 8 and demonstrate the superior performance of the proposed technique in terms of precision, F1 score, accuracy, and recall when compared with alternative models. The low standard deviation values indicate consistent and stable performance across folds, reinforcing confidence in the trustworthiness and reliability of MobileNet.
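For readers who want to reproduce a beeswarm-style summary plot like the one in Fig 2, the following is a minimal Kernel SHAP sketch. A small random forest trained on synthetic data stands in for the transfer-learning model purely to keep the example self-contained, and the feature names are illustrative assumptions, not the study's actual columns.

```python
# Minimal Kernel SHAP sketch producing a beeswarm/summary plot.
# The model and data are synthetic stand-ins; feature names are assumed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["road_category", "drug_use", "weather", "speed_limit"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy severity label

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Kernel SHAP treats the model as a black box through its predict function.
explainer = shap.KernelExplainer(lambda data: clf.predict_proba(data)[:, 1],
                                 shap.sample(X, 50))
shap_values = explainer.shap_values(X[:100])

# Beeswarm plot: features ranked by importance (y-axis), impact on the
# model output (x-axis), points coloured by feature value.
shap.summary_plot(shap_values, X[:100], feature_names=feature_names)
```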
Discussion The study's findings not only highlight the improved accuracy in predicting accident severity but also offer crucial insights into the significance of feature importance analysis, particularly concerning policy formulation and safety interventions.Feature importance analysis, exemplified through SHAP values, holds importance for stakeholders involved in transportation safety, policy-making, and law enforcement.By identifying influential features in accident severity prediction models, policymakers gain an understanding of the factors contributing most significantly to accidents.This understanding is pivotal for devising targeted interventions and formulating evidence-based policies to mitigate accident severity and frequency. The interpretability facilitated by SHAP values enables transportation planners to prioritize interventions based on the most impactful features.For instance, if environmental factors or specific vehicle types consistently emerge as influential, policymakers can direct resources and interventions toward improving road infrastructure, enhancing vehicle safety standards, or implementing targeted awareness campaigns. Moreover, the insights derived from feature importance analysis empower law enforcement organizations to optimize their strategies for traffic management and accident prevention.Identifying the key factors affecting severity aids in allocating resources efficiently, deploying enforcement measures where they are most needed, and devising preventive measures tailored to address the root causes of accidents.Additionally, the study's focus on MobileNet and the identification of influential features contribute directly to the development of more focused and effective actions aimed at reducing accidents.MobileNet's superior predictive accuracy, coupled with the understanding of influential features, presents an opportunity to devise proactive safety measures. The broader implications extend beyond road safety.Accurate accident severity prediction aids in optimizing emergency response, insurance risk assessment, traffic management, and fleet safety.Moreover, its relevance in the realms of autonomous vehicles, public health research, urban planning, and smart city initiatives underscores its multifaceted significance.Furthermore, the application of feature importance analysis is not confined solely to road accidents.Its adaptability extends to aviation, maritime, and industrial safety, thereby enhancing safety measures across various domains and contributing to enhanced decision-making and accident prevention, ultimately saving lives and resources. 
The feature importance analysis carried out in this study plays a vital role in informed policy decisions, targeted interventions, and overarching safety improvements, resonating across diverse sectors and aligning with the broader objective of ensuring safer transportation systems worldwide.

Comparison with state-of-the-art
Two studies are selected for comparison. [27] applied a DNN model to detect injury severity and used the layer-wise relevance propagation (LRP) method to explain the prediction outcomes, with experiments on a Chinese traffic accident dataset. While LRP is a widely used technique, it is sensitive to hyperparameters and to the choice of propagation rules, and its explanations may lack the consistency provided by Shapley values. LRP primarily produces local explanations, attributing relevance to features for a specific input instance; the interpretation of its results can be more complex, and the choice of specific rules can strongly affect the explanations.

[29] applied several models (DT, NB, MLP, SVM, NN, and ANN-MLP) and achieved 76.90% accuracy on the NHTSA-USA dataset, using a Shapley decision plot to extract significant features. The use of the Shapley decision plot is commendable, but different Shapley-based methods can vary in interpretability and robustness. Shapley decision plots are sensitive to the sampling of instances used to compute Shapley values: if the sampled instances do not adequately represent the diversity of the dataset, the decision plot may not accurately reflect the true distribution of Shapley values. The NHTSA-USA dataset, while valuable for studying traffic accidents, also has limitations, including underreporting, inconsistent reporting, missing data, geographical and temporal biases, limited context, privacy concerns, data incompleteness, class imbalance, collection bias, and changing data standards.

The proposed approach leverages transfer learning models on a US traffic dataset, enhancing generalizability by utilizing pre-trained models for improved performance and robustness. The introduction of the Shapley beeswarm plot, based on Shapley values and cooperative game theory, is a noteworthy innovation, providing a theoretically sound and visually interpretable framework for explaining feature contributions across different predictions. Shapley values contribute to high interpretability and versatility, facilitating a comprehensive understanding of complex models and insight into the decision-making process. The demonstrated robustness, with an accuracy of 98.17%, indicates not only accurate predictions but also meaningful insights into the influential factors driving those predictions.

Conclusion
Traffic accidents continue to pose a significant threat, resulting in loss of lives, injuries, and substantial disruption on the roadways. Understanding the underlying factors that lead to these accidents is imperative for improving safety across transportation networks. The study leverages various transfer learning techniques and explains the most influential factors through the application of Shapley values. The research explored the prediction of accident severity using Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Residual Networks (ResNet), EfficientNetB4, InceptionV3, Extreme Inception (Xception), and MobileNet models. Among these, MobileNet stood out with the highest accuracy of 98.17%.
This knowledge provides a foundation for developing more effective measures to prevent accidents. In doing so, this research improves the accuracy of severity prediction and promotes the transparency, interpretability, and trustworthiness of learning models, which is essential for stakeholders and decision-makers seeking to take evidence-based actions to enhance road safety. Ultimately, the study's emphasis on transparency, interpretability, and the role of critical features serves as a cornerstone for informed decision-making in road safety measures, paving the way for a substantial reduction in the repercussions of traffic accidents on our highways.

Fig 2. Assessing the impact of features on the performance of MobileNet using SHAP. https://doi.org/10.1371/journal.pone.0300640.g002

Fig 3 provides the comparison of the transfer learning models and clearly shows the superiority of the proposed MobileNet.

Table 2. Data set description.

ResNet
Residual Networks, or ResNets [49], are a strong and innovative kind of deep neural network design that has had a significant influence on computer vision and deep learning since their introduction in 2015 by Kaiming He et al. They were developed to overcome the difficulty of training extremely deep neural networks by addressing the vanishing gradient problem, which commonly impedes deep network training. The essential innovation of ResNets is the "residual block," which comprises two key paths: the identity path carries the original input directly to the output, while the residual path applies a sequence of convolutional layers and non-linear activations to the input. Skip connections, also known as shortcut connections, allow gradients to flow more easily during training, enabling the training of extremely deep networks with hundreds or thousands of layers without performance deterioration (a minimal residual block is sketched below). ResNets have excelled in a variety of image classification tasks, most notably the ImageNet Large Scale Visual Recognition Challenge. Because of their effectiveness in lowering error rates, they are a popular choice for many image-related applications. ResNets are also widely employed in transfer learning, in which pre-trained ResNet models are fine-tuned for specific image recognition tasks with little data. Over the years, several adaptations and improvements to the original ResNet design, such as ResNetV2 and Wide ResNets, have been produced, further enhancing performance and efficiency. Although created for image classification, ResNets have found uses in other fields such as natural language processing and speech recognition. Residual Networks have had a tremendous influence on the field of deep learning, becoming a standard design for numerous computer vision applications and allowing the training of extraordinarily deep networks while preserving good accuracy and generalization.

Table 3. Results of deep learning models for traffic accident severity detection. https://doi.org/10.1371/journal.pone.0300640.t003

Among the deep learning models, LSTM is the lowest performer with an accuracy of 81.27%, whereas the CNN model exhibited the most impressive results for predicting traffic accident severity.
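The residual block described above can be written in a few lines of Keras. The following generic sketch shows the identity (shortcut) path added to a small convolutional stack; it is not the exact block configuration used in the paper.

```python
# A minimal residual block: identity shortcut added to two conv layers.
# Filter counts and input shape are illustrative, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = x                                   # identity path
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                # skip connection
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs, 64)
model = tf.keras.Model(inputs, outputs)
model.summary()
```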
Table 4 provides a detailed analysis of the performance of the transfer learning models employed for traffic accident severity detection: ResNet, EfficientNetB4, InceptionV3, Xception, and MobileNet. The results show that MobileNet stands out as the top performer, attaining the highest accuracy at 98.17%, alongside 98.34% precision, 98.91% recall, and a 98.48% F1 score. In contrast, the Xception model ranks lower in precision (82.63%) and F1 score (85.19%), indicating areas where it may require improvement. InceptionV3 and EfficientNetB4 achieved accuracy scores of 92.48% and 93.67%, respectively, while ResNet shows the second highest results with 95.27% accuracy, 96.25% precision, 98.19% recall, and a 97.67% F1 score.
7,114.8
2024-04-09T00:00:00.000
[ "Engineering", "Computer Science" ]
Multiplicity and rapidity dependence of K ∗ ( 892 ) 0 and φ ( 1020 ) production in p–Pb collisions at √ s NN = 5.02 TeV The transverse-momentum ( p T ) spectra of K ∗ ( 892 ) 0 and φ ( 1020 ) measured with the ALICE detector up to p T = 16 GeV/ c in the rapidity range − 1.2 < y < 0.3, in p–Pb collisions at the center-of-mass energy per nucleon–nucleon collision √ s NN = 5 . 02 TeV are presented as a function of charged particle multiplicity and rapidity. The measured p T distributions show a dependence on both multiplicity and rapidity at low p T whereas no significant dependence is observed at high p T . A rapidity dependence is observed in the p T -integrated yield (d N /d y ), whereas the mean transverse momentum ( h p T i ) shows a flat behavior as a function of rapidity. The rapidity asymmetry ( Y asym ) at low p T ( < 5 GeV/ c ) is more significant for higher multiplicity classes. At high p T , no significant rapidity asymmetry is observed in any of the multiplicity classes. Both K ∗ ( 892 ) 0 and φ ( 1020 ) show similar Y asym . The nuclear modification factor ( Q CP ) as a function of p T shows a Cronin-like enhancement at intermediate p T , which is more prominent at higher rapidities (Pb-going direction) and in higher multiplicity classes. At high p T ( > 5 GeV/ c ), the Q CP values are greater than unity and no significant Introduction The primary goals of high-energy heavy-ion (A-A) collisions are to create a system of deconfined quarks and gluons known as quark-gluon plasma (QGP) and to study its properties [1][2][3][4].Asymmetric collision systems like proton-nucleus (p-A) and deuteron-nucleus (d-A) can be considered as control experiments where the formation of an extended QGP phase is not expected.These collision systems are used as baseline measurements to study the possible effects of cold nuclear matter and disentangle the same from hot dense matter effects produced in heavy-ion collisions [5][6][7][8][9][10][11][12][13][14].In addition, p-A collisions at Large Hadron Collider (LHC) energies enable probing the parton distribution functions in nuclei at very small values of the Bjorken x variable, where gluon saturation effects may occur [15][16][17].Recent measurements in high-multiplicity pp, p-Pb, p-Au, d-Au, and 3 He-Au collisions at different energies have shown features such as anisotropies in particle emission azimuthal angles, strangeness enhancement, and long-range structures in two-particle angular correlations on the near and away side, which previously have been observed in nucleus-nucleus collisions [18][19][20][21][22][23][24][25][26][27][28][29].The origin of these phenomena in small systems is not yet fully understood.A systematic study of multiplicity and rapidity dependence of hadron production allows us to investigate the mechanism of particle production and shed light on the physics processes that contribute to the particle production [15].Similar studies have been reported by the experiments at the LHC [9, 10, 16,17] and Relativistic Heavy Ion Collider (RHIC) [6,7,[19][20][21].The mechanism of hadron production may be influenced by different effects such as nuclear modification of the parton distribution functions (nuclear shadowing) and possible parton saturation, multiple scattering, and radial flow [16,[30][31][32].These effects are expected to depend on the rapidity of the produced particles.In p-Pb collisions, one can expect that the production mechanism may be sensitive to different effects at forward (p-going) and backward (Pb-going) rapidities 
[9,10,16,17,32,33]. The partons of the incident proton are expected to undergo multiple scattering while traversing the Pb nucleus. It is thus interesting to study the ratio of particle yields between the Pb- and p-going directions, represented by the rapidity asymmetry (Y_asym) defined as

Y_asym = (d²N/dp_T dy)|_{-0.3<y<0} / (d²N/dp_T dy)|_{0<y<0.3},   (1)

where (d²N/dp_T dy)|_{-0.3<y<0} is the particle yield in the rapidity (y) interval -0.3 < y < 0, considered as the Pb-going direction, and (d²N/dp_T dy)|_{0<y<0.3} is the particle yield in the rapidity interval 0 < y < 0.3, corresponding to the p-going direction. From the experimental point of view, Y_asym is a powerful observable because systematic uncertainties cancel out in the ratio and hence it can better discriminate rapidity-dependent effects among models [16][17][18]. Gluon saturation effects at low Bjorken x values [7,18] may affect the transverse momentum distribution of hadron production at large rapidities in the p-going direction in p-Pb collisions at LHC energies. The gluon saturation effects depend on the colliding nuclei and rapidity as A^{1/3} e^{λy}, where A represents the mass number [18] and λ is a parameter whose value lies between 0.2 and 0.3 and is obtained from fits to the HERA measurements [8]. The effect of rapidity on particle production is tested by measuring the ratios of the integrated yield (dN/dy) and the mean transverse momentum (⟨p_T⟩) at a given y to the values at y = 0, denoted as (dN/dy)/(dN/dy)_{y=0} and ⟨p_T⟩/⟨p_T⟩_{y=0}. It is also important to study the variation with rapidity of the nuclear modification factor between central and non-central collisions. This factor, Q_CP(p_T), is defined as

Q_CP(p_T) = [(d²N/dp_T dy) / ⟨N_coll⟩]_{HM} / [(d²N/dp_T dy) / ⟨N_coll⟩]_{LM},   (2)

where ⟨N_coll⟩ is the average number of nucleon-nucleon collisions in high-multiplicity (HM) and low-multiplicity (LM) events, respectively. The multiplicity dependence of K*0 and φ meson production at

Data analysis
Measurements of K*0 and φ meson production are carried out on the data sample collected in 2016 during the second LHC run with p-Pb collisions at √s_NN = 5.02 TeV. The resonances are reconstructed from their decay products by using the invariant-mass method. The considered decay channels are K*0 → K+π− and its charge conjugate, and φ → K+K−, with respective branching ratios (BR) of 66.6% and 49.2% [26,27]. In the p-Pb configuration, the 208Pb beam, with an energy of 1.58 TeV per nucleon, collides with a proton beam with an energy of 4 TeV, resulting in collisions at a nucleon-nucleon center-of-mass energy √s_NN = 5.02 TeV [26]. This leads to the rapidity in the center-of-mass frame being shifted by Δy = −0.465 in the direction of the proton beam with respect to the laboratory frame. The measurements are performed in the rapidity range -1.2 < y < 0.3 for five rapidity intervals with a width of 0.3 units and three multiplicity classes, along with a multiplicity-integrated class. The details of the ALICE detector setup and its performance can be found in Refs. [44,45]. The measurements are carried out with the ALICE central barrel detectors, which are used for tracking, PID, and primary-vertex reconstruction and are housed inside a solenoidal magnet with a magnetic field of 0.5 T. The main detectors used for the analyses presented here are the Inner Tracking System (ITS) [46], the Time Projection Chamber (TPC) [47], and the Time-Of-Flight (TOF) [48] detectors. These detectors have full azimuthal coverage and a common pseudorapidity coverage of |η| < 0.9.
Event and track selection and particle identification The trigger and event selection criteria are the same as those discussed in previous publications [26,27].The events are selected with a minimum-bias trigger based on the coincidence of signals in two arrays of 32 scintillator detectors covering full azimuth and the pseudorapidity regions 2.8 < η < 5.1 (V0A) and -3.7 < η < -1.7 (V0C) [49].The primary vertex of the collision is determined using the charged tracks reconstructed in the ITS and the TPC.Events are selected whose reconstructed primary vertex position lies within ±10 cm from the center of the detector along the beam direction.The Silicon Pixel Detector (SPD) which is the innermost detector of the ITS, is used to reject events in which multiple collision vertices are found (pile-up) [46].In this work, approximately 540 million events are selected with the criteria described above.The minimum-bias events are further divided into three multiplicity classes, which are expressed in percentiles according to the total charge deposited in the V0A detector [49].The yield of K * 0 and φ mesons is measured in five rapidity regions -1.2 < y < -0.9, -0.9 < y < -0.6, -0.6 < y < -0.3, -0.3 < y < 0 and 0 < y < 0.3 for the multiplicity classes 0-10%, 10-40%, 40-100% in addition to the multiplicity-integrated (0-100%) measurement, corresponding to all minimum-bias events.The 10% of the events with the highest multiplicity of charged particles correspond to the 0-10% class and similarly, the 40-100% class corresponds to the lowest multiplicity.The N coll values are estimated from a Glauber model analysis [50] of the charged particle multiplicity distribution in the V0A detector, and they are 13.8 ± 3.8, 10.5 ± 3.9 and 4.0 ± 2.6, respectively for 0-10%, 10-40%, and 40-100% multiplicity classes taken from Ref. [51].Charged-particle tracks reconstructed in the TPC with p T > 0.15 GeV/c and pseudorapidity |η| < 0.8 are selected for the analysis.The selected charged tracks should have crossed at least 70 out of 159 readout-pad rows of the TPC.The distance of closest approach of the track to the primary vertex in the longitudinal direction (DCA z ) is required to be less than 2 cm.In the transverse plane (xy) a p T -dependent selection of DCA xy (p T ) < 0.0105 + 0.035 p −1.1 T cm is applied.The K * 0 and φ mesons are reconstructed from their decay daughters (pions and kaons), which are identified by measuring the specific ionization energy loss (dE/dx) in the TPC [47] and their time-of-flight information using the TOF [48].For the selection of pions and kaons, the measured dE/dx is required to be within nσ TPC from the expected dE/dx values for a given mass hypothesis, where σ TPC is the TPC dE/dx resolution.The values of n are momentum-dependent (p) and are set to 6σ TPC , 3σ TPC , and 2σ TPC in the momentum intervals p < 0.3 GeV/c, 0.3 < p < 0.5 GeV/c and p > 0.5 GeV/c, respectively.If the TOF information is available for the considered tracks, it is used for pion and kaon identification in addition to the TPC one by requiring the time-of-flight of the particle to be within 3σ TOF from the expected value for the considered mass hypothesis, where σ TOF is the time-of-flight resolution of the TOF. 
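The momentum-dependent nσ selection described above can be summarized by a small helper function. This is an illustrative sketch of the selection logic only, not ALICE analysis code; the function name and inputs are hypothetical.

```python
# Illustrative sketch of the momentum-dependent PID selection described
# above: a track passes if its TPC dE/dx deviation is within an n-sigma
# window that depends on momentum, and, when TOF information exists,
# its time-of-flight is within 3 sigma as well.
def passes_pid(p, n_sigma_tpc, n_sigma_tof=None):
    if p < 0.3:
        tpc_cut = 6.0
    elif p < 0.5:
        tpc_cut = 3.0
    else:
        tpc_cut = 2.0
    if abs(n_sigma_tpc) > tpc_cut:
        return False
    if n_sigma_tof is not None and abs(n_sigma_tof) > 3.0:
        return False
    return True

# Example: a kaon candidate at p = 0.6 GeV/c, 1.5 sigma in the TPC and
# 2.0 sigma in the TOF, would be accepted.
print(passes_pid(0.6, 1.5, 2.0))   # True
```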
Yield extraction
The K*0 and φ resonances are reconstructed from their decay products using the invariant-mass reconstruction technique described in Refs. [26,27]. The invariant-mass distributions of K±π∓ and K+K− pairs in the same event are reconstructed. The shape of the uncorrelated background is estimated using two techniques, namely the mixed-event and like-sign methods. In the mixed-event method, the shape of the uncorrelated-background distribution for K*0 (φ) is obtained by combining pions (kaons) from a given event with opposite-sign kaons from other events. Each event is mixed with five different events to reduce the statistical uncertainties of the estimated uncorrelated-background distribution. The events which are mixed are required to have similar characteristics, i.e. the longitudinal position of the primary vertices should differ by less than 1 cm, and the multiplicity percentiles, computed from the V0A amplitude, should differ by less than 5%. The mixed-event distributions for K*0 (φ) candidates are normalized in the invariant-mass interval 1.1 < M_Kπ < 1.15 GeV/c² (1.06 < M_KK < 1.09 GeV/c²), which is well separated from the signal peak.

Figure 1: Invariant-mass distributions after combinatorial background subtraction for K*0 and φ candidates in the multiplicity class 0-10% and transverse momentum range 2.2 ≤ p_T < 3.0 GeV/c in the rapidity intervals -0.3 < y < 0 (panels (a) and (b)) and 0 < y < 0.3 (panels (c) and (d)). The K*0 peak is described by a Breit-Wigner function whereas the φ peak is fitted with a Voigtian function. The residual background is described by a polynomial function of order 2.

In the like-sign method, tracks with the same charge from the same event are paired to estimate the uncorrelated background contribution. The invariant-mass distribution for the uncorrelated background is obtained as the geometric-mean combination 2√(n++ × n−−), where n++ and n−− are the numbers of positive-positive and negative-negative pairs in each invariant-mass interval, respectively. The mixed-event technique is used as the default method to extract the yields for both K*0 and φ mesons, while the difference with respect to the yield obtained using the combinatorial background from the like-sign method is included in the estimation of the systematic uncertainty. After the subtraction of the combinatorial background, the invariant-mass distribution consists of a resonance peak sitting on top of a residual background of correlated pairs. The residual background originates from correlated pairs from jets, misidentification of pions and kaons from K*0 and φ meson decays, and partially reconstructed decays of higher-mass particles [26]. Figure 1 shows the K±π∓ and K+K− invariant-mass distributions after subtraction of the mixed-event background in the transverse momentum interval 2.2 ≤ p_T < 3.0 GeV/c for the rapidity intervals -0.3 < y < 0 (panels (a) and (b)) and 0 < y < 0.3 (panels (c) and (d)) in the 0-10% multiplicity class.

The signal peak is fitted with a Breit-Wigner function for the K*0 and with a Voigtian function (the convolution of Breit-Wigner and Gaussian functions) for the φ resonance. For the K*0, a pure Breit-Wigner is used because the invariant-mass resolution is negligible with respect to the natural width of the resonance peak.
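For reference, the standard forms of the two peak functions named above are written below. The overall normalization conventions, and whether a relativistic Breit-Wigner variant is used, are assumptions here, as they are not spelled out in this passage.

```latex
% Breit-Wigner peak (used for the K*0 signal) and Voigtian
% (Breit-Wigner convolved with a Gaussian resolution term, used for the phi).
% M0 is the peak mass, Gamma0 the width, sigma the mass resolution;
% normalizations are illustrative.
\begin{equation}
  \mathrm{BW}(M) \;=\; \frac{A}{2\pi}\,
  \frac{\Gamma_{0}}{(M - M_{0})^{2} + \Gamma_{0}^{2}/4}
\end{equation}
\begin{equation}
  \mathrm{V}(M) \;=\; \int \mathrm{BW}(M')\,
  \frac{1}{\sqrt{2\pi}\,\sigma}\,
  \exp\!\left[-\frac{(M - M')^{2}}{2\sigma^{2}}\right]\mathrm{d}M'
\end{equation}
```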
A second-order polynomial function is used to describe the shape of the residual background for both resonances.The fit to the invariant-mass distribution is performed in the interval 0.75 < M Kπ < 1.15 GeV/c 2 (0.99 < M KK < 1.07 GeV/c 2 ) for K * 0 (φ ).The widths of K * 0 and φ peaks are fixed to their known widths [52], whereas the resolution parameter of the Voigtian function for φ is kept as a free parameter.In the estimation of the systematic uncertainties, the width of the Breit-Wigner is taken as a free parameter.The mass and width values extracted from the fit have similar magnitude and trend with p T as reported in previous publications [26,27,34,53].In the present study, it is found that the mass and width obtained from the fit are independent of rapidity and multiplicity for both K * 0 and φ mesons.The sensitivity of the systematic uncertainty to the choice of the fitting range, normalization interval of the mixed-event background, shape of the residual background function, width, and resolution parameters have been studied by varying the fit configuration, as described in Section 2.3.The raw yields of K * 0 and φ mesons are extracted in the transverse momentum range from 0.8 to 16 GeV/c for various rapidity intervals and multiplicity classes. To obtain the transverse momentum spectra, the raw yields are normalized by the number of accepted non-single-diffractive (NSD) events and corrected for the branching ratio and the detector acceptance (A) times the reconstruction efficiency (ε rec ).The A × ε rec is obtained from a Monte Carlo (MC) sim- ulation based on the DPMJET [40] event generator and the GEANT3 package to model the transport of the generated particles through the ALICE detector [54].The A × ε rec is defined as the ratio of the reconstructed p T spectra of K * 0 (φ ) mesons in a given rapidity interval to the generated ones in the same rapidity interval.The track and PID selection criteria applied to the decay products of resonances in the MC are identical to those used in the data.Since the efficiency depends on p T and the p T distributions of K * 0 and φ mesons from DPMJET are different from the real data, a re-weighting procedure is applied to match the generated p T shapes to the measured ones. The effect of the re-weighting on A × ε rec depends on p T and amounts to ∼5-17% at p T < 1.5 GeV/c.At higher p T , the effect is negligible.The effect of re-weighting also depends on rapidity at low p T .The re-weighted A × ε rec is used to correct the raw p T distribution.The A × ε rec is calculated for each rapidity interval and multiplicity class considered in the analysis.The A × ε rec as a function of p T shows a rapidity dependence for a given multiplicity class, however, no significant multiplicity dependence of A × ε rec is observed for a given rapidity interval. 
Systematic uncertainties The procedure to estimate the systematic uncertainties is similar to the one adopted in previous analyses [26,27].The sources of systematic uncertainties on the measured yield of K * 0 and φ mesons are signal extraction, track selection criteria, PID, global tracking efficiency, uncertainty on the material budget of the ALICE detector, and the hadronic interaction cross section in the detector material.A summary of the systematic uncertainties on the p T spectra is given in Table 1.The uncertainty due to signal extraction is estimated from the variation of the yields when varying the invariant mass fit range, the treatment of the Breit-Wigner width in the fits, the mixed-event background normalization interval, the choice of residual background function, and the method to determine the combinatorial background.The fitting range is varied by 50 MeV/c 2 for K * 0 and 10 MeV/c 2 for φ .The normalization interval of the mixed-event background is varied by 150 (50) MeV/c 2 with respect to the default value for K * 0 (φ ).The width of K * 0 and φ resonances is left as a free parameter in the fit, instead of fixing it to the world-average value.For φ resonances, the effect on the yield due to the variation of resolution parameter (σ of the Gaussian function in the Voigtian distribution) is also considered.The residual background is parameterized using first-order and third-order polynomial functions for estimating its contributions to the systematic uncertainties.The combinatorial background from the like-sign method is used instead of the one from event mixing.The estimated systematic uncertainties due to the yield extraction is 5.2% for K * 0 and 3.3% for φ .The systematic effects due to charged track selection have been studied by varying the selection criteria on the number of crossed rows in the TPC, the ratio of the numbers of TPC crossed rows to findable clusters, and the DCA to the primary vertex of the collisions.The estimated uncertainties due to the track selection is 2.5% for K * 0 and about 5% for the φ mesons.To estimate the systematic uncertainty due to the PID, the selections on the dE/dx and time-of-flight of the pions and kaons are varied.Two momentum-independent selections: 2σ TPC with 3σ TOF and 2σ TPC only, are used for both K * 0 and φ .The estimated systematic uncertainties are 3% for K * 0 and 1.7% for φ .The uncertainty due to the global tracking efficiency, description of the detector material budget in the simulation, and the cross sections for hadronic interactions in the material are taken from Ref. [26].The total systematic uncertainty is taken as the quadratic sum of all contributions leading to 7.5% for K * 0 and 7.3% for φ mesons.No multiplicity and rapidity dependence of the systematic uncertainties is observed.Therefore, the systematic uncertainties on the p T spectra determined for minimum-bias events in the rapidity interval 0 < y < 0.3 are assigned in all rapidity intervals and multiplicity classes. 
The systematic uncertainties on Y asym are estimated by considering the same approaches and variations as for the corrected yields.The systematic uncertainties due to signal extraction and PID are uncorrelated among different rapidity intervals whereas the other sources of systematic uncertainties such as track selections, global tracking uncertainties, material budget and hadronic interactions are correlated and cancel out in the Y asym ratio.For the uncorrelated sources of uncertainty, the same variations considered for the yields were studied by estimating their effects on the Y asym ratio.The resulting uncertainty was estimated to be about 2.5% (2%) for K * 0 (φ ) mesons.No multiplicity and rapidity dependence of the uncertainties is observed for Y asym .Therefore, the systematic uncertainties determined for minimum-bias events are assigned to the ratios in the different rapidity intervals and multiplicity classes.The systematic uncertainties on the ratios (dN/dy)/(dN/dy) y=0 and p T / p T y=0 as a function of rapidity are calculated in a similar way as for Y asym .The systematic uncertainties on ( dN/dy)/(dN/dy) y=0 and p T / p T y=0 are 2.2% (2%) and 1.2% (1%) for K * 0 (φ ), respectively. Results and discussion The rapidity and multiplicity dependence results on the p T spectra, the dN/dy, the p T , the Y asym , and the Q CP in p-Pb collisions at √ s NN = 5.02 TeV are discussed.The measurements are also compared with various model predictions. Transverse momentum spectra Figure 2 and Fig. 3 show the p T spectra of K * 0 and φ mesons in p-Pb collisions at √ s NN = 5.02 TeV for five rapidity intervals within -1.2 < y < 0.3 and for two multiplicity classes 0-10% and 40-100%, respectively.The ratios of the p T spectra in different rapidity intervals to that in the interval 0 < y < 0.3 are presented in the bottom panels of Fig. 2 and Fig. 3.The measured p T spectra of K * 0 and φ mesons in the 0-10% multiplicity class show a rapidity dependence at low p T (< 5 GeV/c) indicating that the production of these resonances is higher in the Pb-going direction (y < 0) than in the p-going direction (y > 0).For high p T , no rapidity dependences are observed. Integrated particle yield and mean transverse momentum The dN/dy and the p T are obtained from the transverse momentum spectra in the measured p T interval and using a fit function to account for the contribution of K * 0 and φ mesons in unmeasured regions.The spectra are fitted with a Lévy-Tsallis function [55] and the fit function is extrapolated to unmeasured regions at low p T ( < 0.8 GeV/c).The integral of the fit function in the extrapolated region accounts for 33% (39%) of the total yield in the 0-10% (40-100%) multiplicity class for both K * 0 and φ mesons.The contribution of the extrapolated yield at low p T is the same for all rapidity intervals.The contribution of the yield in the unmeasured region at high p T ( > 16 GeV/c) is negligible for both K * 0 and φ mesons.The extrapolated yield contribution at low p T obtained with different fitting functions (i.e., m T -exponential, Bose-Einstein and Boltzmann-Gibbs Blast-Wave function [56]) and that obtained with the default Lévy-Tsallis function is 5% (8%) for the 0-10% (40-100%) included as the systematic uncertainties in the dN/dy and it varies by 2-5 % for the p T .In Fig. 
4 the dN/dy (top panels) and p T (bottom panels) of K * 0 (left) and φ (right) mesons are shown as a function of y for minimum-bias p-Pb collisions at √ s NN = 5.02 TeV.The central values of the dN/dy of both K * 0 and φ mesons decrease slightly from the rapidity interval −1.2 < y < −0.9 to 0 < y < 0.3 even though within the systematic uncertainties all the data points are compatible among each other.Nevertheless, considering that the systematic uncertainties are mostly correlated among the rapidity intervals, the measured dN/dy values suggest a decreasing trend with increasing y in the rapidity interval covered by the measurement.The p T is constant as a function √ s NN = 5.02 TeV ALICE Collaboration of rapidity for both K * 0 and φ resonances.The predictions from EPOS-LHC [35], EPOS3 with and without UrQMD [38,39], DPM-JET [40], HIJING [41], and PYTHIA8/Angantyr [42] The model predictions from EPOS-LHC [35], EPOS3 with and without UrQMD [38,39], DPMJET [40], HIJING [41], and PYTHIA8/Angantyr [42] are also shown in the Fig. 4. In general, the models show a similar trend with rapidity as the data except EPOS3 with and without UrQMD for p T , which shows a pronounced decreasing trend with rapidity.All the model predictions shown in Fig. 4 underestimate the p T of both meson species.For the dN/dy, HIJING and EPOS3 with and without UrQMD overpredict the measured values for both K * 0 and φ , while PYTHIA8/Angantyr overpredicts the K * 0 and underpredicts the φ yield.EPOS-LHC provides the best overall description of the dN/dy and p T measurements for K * 0 and φ mesons.The p T also shows a flat behavior as a function of rapidity for all the considered multiplicity classes as it can be seen in A similar behavior in the average transverse kinetic energy as a function of rapidity for strange hadrons was reported in Ref. [16].The rapidity dependence of dN/dy and p T for K * 0 and φ mesons in the multiplicity class 0-100% is further studied by dividing the dN/dy and p T values in a given rapidity interval by the corresponding values at y = 0, as shown in Fig. 5.The dN/dy and p T value at y = 0 is computed from the p T spectrum measured in the rapidity interval -0.3< y <0.3.The systematic uncertainties on these ratios are estimated by studying the effects of the variations directly on the ratios as discussed in Section 2.3.This procedure takes into account the correlation of the systematic uncertainties across rapidity bins: as a result, these ratios have smaller systematic uncertainties than those on the dN/dy and p T , and allow for a better insight into the y dependence.The ratio (dN/dy)/(dN/dy) y=0 decreases with rapidity, whereas p T / p T y=0 shows a flat behavior within uncertainties as a function of rapidity for K * 0 and φ mesons.The measurements are compared with various model predictions.The predictions from HIJING qualitatively reproduce the trend and are the closest to the data for both K * 0 and φ .The predictions from PYTHIA8/Angantyr, DPMJET, EPOS-LHC, EPOS3 with and without UrQMD show a decreasing trend of (dN/dy)/(dN/dy) y=0 with increasing y, but the rapidity dependence is less pronounced than the one in data, as it can be seen by the fact that they all tend to underestimate the measured yield ratios in the lowest rapidity intervals, especially for K * 0 meson.For p T / p T y=0 as a function of y, also shown in Fig. 
5, EPOS3 with and without UrQMD overestimate the measurements at low y and predict a marked decreasing trend of p T / p T y=0 with rapidity, which is not supported by the data.From the other models, less pronounced trends are expected, which are consistent with the data.In particular, HIJING predicts a slightly decreasing p T / p T y=0 with increasing rapidity, while PYTHIA8/Angantyr, DPMJET, and EPOS-LHC predict a slightly increasing trend. Similar studies of the ratio p T / p T y=0 of charged hadrons in p-Pb collisions at √ s NN = 5.02 TeV compared with the predictions of hydrodynamics and color-glass condensate (CGC) model were reported in [33].Predictions from hydrodynamic calculations show a decrease in p T with rapidity, whereas CGC predicts an increase in p T with rapidity [33], while the data are flat within uncertainties.The dN/dy and p T increase with multiplicity at midrapidity as observed for light-flavor hadrons and resonances in pp and p-Pb collisions [26,27,56].A similar behavior is observed in this article for K * 0 and φ in all the different rapidity intervals shown in The p T integrated yield (dN/dy) (upper panels) and mean transverse momentum ( p T ) (bottom panels) for K * 0 (left) and φ (right) mesons as a function of y, divided by the dN/dy and p T at y = 0 for the multiplicity class 0-100% in p-Pb collisions at √ s NN = 5.02 TeV.The predictions from EPOS-LHC [35], EPOS3 with and without UrQMD [38,39], DPMJET [40], HIJING [41], and PYTHIA8/Angantyr [42] are shown as different curves.The statistical uncertainties are represented as bars whereas the boxes indicate total systematic uncertainties. Rapidity asymmetry The rapidity asymmetry (Y asym ) is calculated from K * 0 and φ mesons yields in -0.3< y <0 and 0< y <0.3, as defined by Equation 1. 
Figure 6 shows the Y asym of K * 0 and φ mesons in the measured p T intervals for various multiplicity classes in p-Pb collisions at √ s NN = 5.02 TeV.The Y asym values for K * 0 and φ as a function of p T are consistent within uncertainties for all multiplicity classes.The Y asym values deviate from unity at low p T ( < 5 GeV/c), suggesting the presence of a rapidity dependence in the nuclear effects.The deviations are more significant for events with high multiplicity.The Y asym values are consistent with unity at high p T ( > 5 GeV/c) for all multiplicity classes, suggesting the absence of nuclear effects at high p T for the production of K * 0 and φ mesons in p-Pb collisions.Similar results have been reported for charged hadrons, pions, protons in d-Au collisions at √ s NN = 200 GeV by the STAR Collaboration [18] and for charged hadrons and multi-strange hadrons in p-Pb collisions at √ s NN = 5.02 TeV by the CMS Collaboration as discussed in Refs.[16,17].Figure 7 shows the comparison of the measured Y asym for K * 0 and φ mesons as a function of p T in minimum-bias events (0-100%) with the model predictions from EPOS-LHC, HIJING with and without shadowing, DPMJET, PYTHIA8/Angantyr, and EPOS3 with and without UrQMD.[35], EPOS3 with and without UrQMD [38,39], DPMJET [40], HIJING [41], and PYTHIA8/Angantyr [42] HIJING with and without shadowing, and EPOS3 with and without UrQMD describe the measured Y asym at low p T within uncertainties, but they significantly overestimate the data at high p T , predicting an increasing trend with p T (more pronounced for K * 0 than for φ ) that is not supported by the measurements, which are consistent with a flat or decreasing trend for both meson species.Model predictions from EPOS-LHC, PYTHIA8/Angantyr, and DPMJET for K * 0 and DPMJET for φ at high p T are in agreement with the data within uncertainties. Nuclear modification factor The nuclear modification factor Q CP is calculated from the K * 0 and φ yields normalized to N coll in high multiplicity (central) and low multiplicity (peripheral) collisions, as defined by Equation 2. 
Figure 8 shows the Q CP of K * 0 (red circles) and φ (blue squares) mesons as a function of p T for 0-10% / 40-100% (top panels) and 10-40% / 40-100% (bottom panels) in various rapidity intervals within the range −1.2 < y < 0.3 for p-Pb collisions at √ s NN = 5.02 TeV.The Q CP of φ mesons seems to be slightly higher than the K * 0 one for the ratio of 0-10% / 40-100%, however, the results for the two meson species are consistent within uncertainties for the ratio 10-40% / 40-100% for all measured rapidity intervals.An enhancement at intermediate p T (2.2 < p T < 5.0 GeV/c), reminiscent of the Cronin effect, is seen for K * 0 and φ mesons in the Q CP .This enhancement is more pronounced at high negative rapidity, i.e., in the Pb-going direction, and for high multiplicity events.The more pronounced Cronin-like enhancement for the 0-10% multiplicity class suggests that multiple scattering effects are more relevant for high multiplicity (central) collisions.At high p T (> 5 GeV/c), the Q CP values are greater than unity, which is a known feature of Q pPb 1 and Q CP when the centrality or multiplicity classes are defined with the V0 detector, and it is interpreted as a selection bias due to the multiplicity estimator [51].The results for K * 0 and φ mesons are consistent between each other within uncertainties.To quantify the rapidity dependence of the nuclear modification factor, the Q CP values of K * 0 and φ mesons for intermediate p T (2.2 < p T < 5.0 GeV/c) are shown as a function of rapidity in Fig. 9.The values of Q CP at intermediate p T show a faster decrease from the rapidity interval −1.2 < y < − 0.9 to 0 < y < 0.3 for 0-10% / 40-100% than for 10-40% / 40-100%, indicating a stronger rapidity dependence of the Cronin-like enhancement in events with high multiplicity.The stronger rapidity dependence for 0-10% / 40-100% can be inferred from the slope parameter (α) of the linear function fit to the Q CP of K * 0 and φ mesons reported in Fig. 
9.The slope of the φ meson Q CP is slightly larger than the K * 0 one.A similar conclusion on the η dependence of nuclear modification factors of charged hadrons was reported by the BRAHMS Summary The transverse momentum differential yields of K * 0 and φ mesons have been measured in the rapidity interval −1.2 < y < 0.3 for various multiplicity classes over the transverse momentum range 0.8 < p T <16 GeV/c in p-Pb collisions at √ s NN = 5.02 TeV with the ALICE detector.The p T spectra of K * 0 and φ mesons show a multiplicity and rapidity dependence at low p T , whereas the spectral shapes are similar for all multiplicity classes and rapidity intervals at high p T (> 5 GeV/c).This suggests that nuclear effects influence K * 0 and φ meson production at low p T .The (dN/dy)/(dN/dy) y=0 ratios decreases with increasing rapidity in the measured interval -1.2 < y <0.3, whereas the average transverse momentum ( p T ) and the p T / p T y=0 ratios show a flat behavior for both K * 0 and φ mesons.The rapidity dependence of dN/dy, p T and their ratios with respect to the corresponding values at y = 0 are compared with model predictions for minimum-bias events.The EPOS-LHC model, which includes parameterized flow, provides the best description for the magnitudes of K * 0 and φ dN/dy and p T , whereas HIJING predictions are in closest agreement with the measured rapidity dependence, which is studied via the ratios (dN/dy)/(dN/dy) y=0 and p T / p T y=0 .The Y asym ratios for K * 0 and φ mesons as a function of p T show deviations from unity at low p T for high multiplicity events, while, their values are consistent with unity within uncertainties at high p T in the measured multiplicity and rapidity intervals.The Y asym ratios of K * 0 and φ mesons are found to be consistent between each other within uncertainties in the measured kinematic region.The measured deviations of Y asym from unity at low p T suggest the presence of rapidity dependent nuclear effects such as multiple scattering, nuclear shadowing, parton saturation, and energy loss in cold nuclear matter.None of the models presented here is able to describe the Y asym of K * 0 and φ mesons at low p T .The nuclear modification factors between the central and peripheral collisions Q CP for K * 0 and φ mesons as a function of p T show a bump, with a maximum around p T =3 GeV/c, suggestive of the Cronin effect.This Cronin-like enhancement is more pronounced for large negative rapidities (in the Pb-going direction) and for more central (higher multiplicity) collisions.The measurements reported in this paper confirm that nuclear effects play an important role in particle production in p-Pb collisions at the LHC energies.They will contribute, along with previous and upcoming measurements of other hadron species, to constrain models and event generators. arXiv:nucl-ex/0306021.The dN/dy and the p T increase with multiplicity for a given rapidity interval.The dN/dy shows a weak rapidity dependence with large uncertainties, and suggesting a more pronounced dependence for events in the highest multiplicity class (0-10%).The p T shows a flat behavior as a function of rapidity for all multiplicity classes in the measured rapidity interval.Similar behavior in the average transverse kinetic energy as a function of rapidity for strange hadrons was reported in Ref. [16]. 
Figure 2 : Figure 2: Top panels: The transverse momentum spectra of K * 0 for five rapidity intervals within −1.2 < y < 0.3 and for two multiplicity classes (0-10%, 40-100%) in p-Pb collisions at √ s NN = 5.02 TeV.The data for different rapidity intervals are scaled for better visibility.Bottom panels: The ratios of the p T spectra in various rapidity intervals to that in the interval 0 < y < 0.3 for a given multiplicity class.The statistical and systematic uncertainties are shown as bars and boxes around the data points, respectively. Figure 3 : Figure 3: Top panels: The transverse momentum spectra of φ for five rapidity intervals within −1.2 < y < 0.3 and for two multiplicity classes (0-10%, 40-100%) in p-Pb collisions at √ s NN = 5.02 TeV.The data for different rapidity intervals are scaled for better visibility.Bottom panels: The ratios of the p T spectra in various rapidity intervals to that in the interval 0 < y < 0.3 for a given multiplicity class.The statistical and systematic uncertainties are shown as bars and boxes around the data points, respectively. Fig. A.1 of Appendix A. Fig.A.1 in Appendix A. Figure 5 : Figure5: The p T integrated yield (dN/dy) (upper panels) and mean transverse momentum ( p T ) (bottom panels) for K * 0 (left) and φ (right) mesons as a function of y, divided by the dN/dy and p T at y = 0 for the multiplicity class 0-100% in p-Pb collisions at √ s NN = 5.02 TeV.The predictions from EPOS-LHC[35], EPOS3 with and without UrQMD[38,39], DPMJET[40], HIJING[41], and PYTHIA8/Angantyr[42] are shown as different curves.The statistical uncertainties are represented as bars whereas the boxes indicate total systematic uncertainties. Figure 6 : Figure 6: Rapidity asymmetry (Y asym ) of K * 0 (red circles) and φ (blue squares) meson production as a function of p T in the rapidity range 0 < |y| < 0.3 for various multiplicity classes in p-Pb collisions at √ s NN = 5.02 TeV.The statistical uncertainties are shown as bars whereas the boxes represent the systematic uncertainties on the measurements. Figure 7 : Figure7: The comparison of experimental results of Y asym for K * 0 and φ meson production as a function of p T in the rapidity range 0 < |y| < 0.3 with the model predictions from EPOS-LHC[35], EPOS3 with and without UrQMD[38,39], DPMJET[40], HIJING[41], and PYTHIA8/Angantyr[42]. Data points are shown with blue markers, and model predictions are shown by different color bands, where bands represent the statistical uncertainity of the model.The statistical uncertainties on the data points are represented as bars whereas the boxes indicate total systematic uncertainties. Figure7: The comparison of experimental results of Y asym for K * 0 and φ meson production as a function of p T in the rapidity range 0 < |y| < 0.3 with the model predictions from EPOS-LHC[35], EPOS3 with and without UrQMD[38,39], DPMJET[40], HIJING[41], and PYTHIA8/Angantyr[42]. Data points are shown with blue markers, and model predictions are shown by different color bands, where bands represent the statistical uncertainity of the model.The statistical uncertainties on the data points are represented as bars whereas the boxes indicate total systematic uncertainties. 
Figure 9 : Figure 9: The Q CP of K * 0 (red circles) and φ (blue squares) mesons as a function of rapidity for 0-10% / 40-100% (solid markers) and 10-40% / 40-100% (open markers) in p-Pb collisions at √ s NN = 5.02 TeV.The solid and doted lines represents the linear fit to data.The statistical and systematic uncertainties are represented by vertical bars and boxes on the measurements, respectively. [ 6 ] Figure A.1 shows the multiplicity dependence of the dN/dy and p T of K * 0 and φ mesons as a function of y in p-Pb collisions at √ s NN = 5.02 TeV.The dN/dy and the p T increase with multiplicity for a given rapidity interval.The dN/dy shows a weak rapidity dependence with large uncertainties, and suggesting a more pronounced dependence for events in the highest multiplicity class (0-10%).The p T shows a flat behavior as a function of rapidity for all multiplicity classes in the measured rapidity interval.Similar behavior in the average transverse kinetic energy as a function of rapidity for strange hadrons was reported in Ref.[16]. Figure A. 1 : Figure A.1: The p T integrated yield (dN/dy) (top panels) and mean transverse momentum ( p T ) (bottom panels) for K * 0 (left panels) and φ (right panels) mesons as a function of y measured for the multiplicity classes 0-10%, 10-40% and 40-100% in p-Pb collisions at √ s NN = 5.02 TeV.The statistical uncertainties are represented as bars whereas boxes indicate the total systematic uncertainties on the measurements. Table 1 : Relative systematic uncertainties for K * 0 and φ yields in p-Pb collisions at √ s NN = 5.02 TeV.The quoted relative uncertainties are averaged over p T in the range 0.8-16 GeV/c.The total systematic uncertainty is the sum in quadrature of the uncertainties due to each source.
9,658.6
2022-04-21T00:00:00.000
[ "Physics" ]
Complete Genome Sequence of Staphylococcus epidermidis CSF41498 Staphylococcus epidermidis CSF41498 is amenable to genetic manipulation and has been used to study mechanisms of biofilm formation. We report here the whole-genome sequence of this strain, which contains 2,427 protein-coding genes and 82 RNAs within its 2,481,008-bp-long genome, as well as three plasmids. S taphylococcus epidermidis is a commensal bacterium and opportunistic pathogen that generally colonizes the human skin and mucous membranes (1,2). Because of its prevalence, S. epidermidis is one of the major causes of nosocomial infections due to its ability to form biofilms on medical implant devices (3). These biofilm infections are extremely difficult to treat, owing to their inherent resistance to the host immune system and antibiotics (4)(5)(6)(7). CSF41498 is a biofilm-forming strain originally isolated from cerebrospinal fluid in Dublin, Ireland (8). This strain is susceptible to several antibiotics, including erythromycin, trimethoprim, tetracycline, kanamycin, and chloramphenicol, facilitating the introduction of plasmids and generation of marked mutations. Furthermore, genetic manipulation of this strain is possible due to the ability to move DNA via electroporation and transduction using bacteriophages ⌽A6C and ⌽187 (9, 10). In addition, CSF41498 has been used to study multiple aspects of biofilm formation, making this strain and its genome sequence useful tools (8,(11)(12)(13)(14)(15)(16). For DNA extraction, S. epidermidis CSF41498 was grown on blood agar plates overnight at 37°C. DNA was then extracted using the DNeasy UltraClean microbial kit (Qiagen, Germany) per the manufacturer's directions. Sequencing was performed as previously described (17). RS II (Pacific Biosciences, USA) single-molecule real-time (SMRT) sequencing produced 116,695 reads, with an average length of 14,633 bp. The reads were assembled using HGAP2 in the SMRT Analysis portal into three polished contigs representing one chromosome and two plasmids. MiSeq (Illumina, Inc., USA) short-read sequencing produced 1,526,588 paired-end reads with an average length of 300 bp and insert size of 500 bp. These reads were mapped to the trimmed and circularized SMRT sequences using the mapper within Geneious (Biomatters, New Zealand) to correct the errors in the long-read sequencing, resulting in an average depth of coverage of 89ϫ for the chromosome and 150ϫ for the plasmids. Reads that did not map to any of the long-read contigs were assembled by the Geneious de novo assembler to form a third plasmid around 8 kb in length, with an average read depth of 850ϫ. This plasmid is smaller than the fragmentation size for the long-read sequencing, which is why there was no contig detected from the SMRT assembly; however, as verification, we found long reads that did map to the plasmid. Genes were predicted using the NCBI Prokaryotic Genome Annotation Pipeline version 4.5 (18). Data availability. The complete genome sequence of S. epidermidis CSF41498 has been deposited in GenBank under the accession numbers CP030246 to CP030249. The sequencing reads have also been deposited in GenBank under accession number SRS3935761. ACKNOWLEDGMENTS The material here has been reviewed by the Walter Reed Army Institute of Research, and there is no objection to its presentation and/or publication. 
The opinions or assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the true views of the Department of the Army or the Department of Defense.
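Since the assembly is public under the accessions listed in the Data Availability section, the headline genome statistics can be checked directly from GenBank. Below is a minimal sketch, assuming Biopython is installed and that CP030246 is the chromosome accession; the e-mail address is a placeholder required by NCBI, not a value from the paper.

```python
# Fetch the CSF41498 chromosome record from GenBank and summarize it.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

handle = Entrez.efetch(db="nucleotide", id="CP030246",
                       rettype="gbwithparts", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

# Count annotated protein-coding features and report the sequence length.
cds = [f for f in record.features if f.type == "CDS"]
print(f"{record.id}: {len(record.seq):,} bp, {len(cds)} annotated CDS features")
```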
749
2019-01-01T00:00:00.000
[ "Biology", "Engineering" ]
Effects in the Optical and Structural Properties Caused by Mg or Zn Doping of GaN Films Grown via Radio-Frequency Magnetron Sputtering Using Laboratory-Prepared Targets : GaN films doped with Mg or Zn were obtained via radio-frequency magnetron sputtering on silicon substrates at room temperature and used laboratory-prepared targets with Mg-doped or Zn-doped GaN powders. X-ray diffraction patterns showed broadening peaks, which could have been related to the appearance of nano-crystallites with an average of 7 nm. Scanning electron microscopy and transmission electron microscopy showed good adherence to silicon non-native substrate, as well as homogeneity, with a grain size average of 0.14 µ m, and 0.16 µ m for the GaN films doped with Zn or Mg, respectively. X-ray photo-electron spectroscopy demonstrated the presence of a very small amount of magnesium (2.10 mol%), and zinc (1.15 mol%) with binding energies of 1303.18, and 1024.76 eV, respectively. Photoluminescence spectrum for the Zn-doped GaN films had an emission range from 2.89 to 3.0 eV (429.23–413.50 nm), while Mg-doped GaN films had an energy emission in a blue-violet band with a range from 2.80 to 3.16 eV (443.03–392.56 nm). Raman spectra showed the classical vibration modes A 1 (TO), E 1 (TO), and E 2 (High) for the hexagonal structure of GaN. Introduction In the last two decades, the III-Nitride semiconductor materials have become more apparent due to their applications in optoelectronics devices, which can be tuned to different wavelengths, ranging from green to ultraviolet emission. GaN belongs to the III-Nitride group; this is a material with great potential for the present and future of the electronics industry, due to its optical, structural, and electrical properties. Gallium nitride has applications in solar cells, microwave devices, LED screens, and high-electron-mobility transistors (HEMTs) [1][2][3]. However, the principal application of GaN is the conservation of electrical energy through the replacement of incandescent light bulbs with LED technology bulbs. Recently, the GaN has also been used in applications for nuclear radiation detectors, biosensors, and nuclear batteries with high energy density/long lifetime, as well as smallscaled fabrication of pacemakers. In this way, GaN is helping to save lives [4][5][6][7][8][9]. The GaN crystallizes in the hexagonal structure, with a band gap energy of Eg = 3.4 eV. However, it should be considered that GaN wafers are very expensive, due to the methods used for the production of ingots (ammonothermal growth, and hydride vapor phase epitaxy). Thus, the GaN can be obtained via films grown on non-native substrates such as Si, SiC, and GaAs. The growth methods mostly used to obtain GaN films are metalorganic chemical vapor deposition (MOCVD) and molecular-beam epitaxy (MBE). On the other hand, obtaining this p-type GaN has been very important in device development. However, MOCVD and MBE need additional compounds to dope the GaN with Mg or Zn, which are the most common dopant elements to obtain the p-type GaN. MOCVD requires metalorganic compounds such as biscyclopentadienylmagnesium (Cp 2 Mg) or diethylzinc ((C 2 H 5 ) 2 Zn), followed by the activation technique of atoms' "low-energy electron beam irradiation (LEEBI)", which is a process applied at the laboratory level with acceleration voltages of the incident electrons at 10 kV. Therefore, this is not an adequate process for standard applications [10][11][12]. 
Molecular-beam epitaxy generally uses a solid source of Mg or Zn atoms during the growth process of the material [13][14][15]. Recently, radio-frequency magnetron sputtering has been used as another option for obtaining GaN films, which could be applied as buffer layers to reduce the difference in the thermal expansion coefficients between substrate and GaN. However, this technique might require the availability of GaN powders with high purity and a single phase for use as a raw material in the targets' production. Additionally, GaN powders can be doped with Mg or Zn during the synthesis process [16][17][18][19][20][21]. This work presents the effects of doping with Mg or Zn on the structural and luminescent properties of GaN films, which were grown via radio-frequency magnetron sputtering, using laboratory-prepared targets with Mg-doped or Zn-doped GaN powders. These powders were reported by our research team in previous works [16,17,19]. The obtained films might be applied as buffer layers in III-Nitride biosensors, pacemakers, and micro-electromechanical systems (MEMS). It is also important to mention that GaN films doped with Mg or Zn were obtained for the first time using targets prepared with this process. GaN films doped with Mg or Zn showed good adherence to the non-native substrate (silicon); their structural characteristics were obtained by X-ray diffraction (XRD) and transmission electron microscopy (TEM). Surface morphology was obtained by scanning electron microscopy (SEM), while elemental analysis was obtained via energy dispersive spectroscopy (EDS). Mg or Zn incorporation in the GaN films was demonstrated by X-ray photoelectron spectroscopy (XPS). The film thickness was found using profilometry, and the resistivity via the four-point probe measurement method. Optical analysis was carried out using Raman spectroscopy and photoluminescence (PL). Materials and Methods GaN films doped with Mg or Zn were obtained via radio-frequency magnetron sputtering over silicon substrates at room temperature (the substrate was not heated), using laboratory-prepared targets with Mg-doped or Zn-doped GaN powders. Material Synthesis for the Laboratory-Prepared Targets The laboratory-prepared targets were made using Mg-doped or Zn-doped GaN powders, whose synthesis processes were reported by our research group in previous works (Gastellóu et al. [16,17,19]). A brief description of the process used to obtain Mg- or Zn-doped GaN powders is provided. To synthesize the Zn-doped GaN powders, 7.477 g (107.20 mmol) of metallic gallium (99.999% pure) and 0.075 g (1.15 mmol) of metallic zinc, which was approximately 1% of the mixture, were used as reagents. For the synthesis of Mg-doped GaN powders, 5.874 g (84.24 mmol) of metallic gallium and 0.059 g (2.44 mmol) of metallic magnesium, which was also approximately 1% of the incorporation, were used as reagents. Additionally, anhydrous ammonia (NH 3 ) was used as the source of nitrogen atoms in both processes. The processes begin by placing the metallic gallium and metallic zinc (or metallic magnesium) in an alumina boat, which was preheated on a hot plate at 200 °C, followed by two hours of manual agitation. After this time, the obtained metallic liquid solution was placed inside a CVD furnace, which was purged with an N 2 flow of 150 sccm (processed at room temperature) to reduce the residual oxygen.
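The "approximately 1%" quoted for both batches matches the weight fraction of the dopant in the metal mixture rather than its mole fraction. A small side check of the reagent quantities just listed, assuming standard atomic weights (an assumption, since the molar masses used are not stated), reproduces the quoted millimole values:

```python
# Check of the dopant fractions quoted for the powder synthesis,
# assuming standard atomic weights in g/mol.
M = {"Ga": 69.723, "Zn": 65.38, "Mg": 24.305}

batches = {
    "Zn-doped": {"Ga": 7.477, "Zn": 0.075},
    "Mg-doped": {"Ga": 5.874, "Mg": 0.059},
}

for name, masses in batches.items():
    moles = {el: m / M[el] for el, m in masses.items()}
    dopant = [el for el in masses if el != "Ga"][0]
    wt_pct = 100 * masses[dopant] / sum(masses.values())
    mol_pct = 100 * moles[dopant] / sum(moles.values())
    print(f"{name}: {moles[dopant]*1e3:.2f} mmol {dopant}, "
          f"{moles['Ga']*1e3:.2f} mmol Ga -> {wt_pct:.2f} wt%, {mol_pct:.2f} mol%")
```

With these masses the nominal dopant content is about 1 wt% for both batches, corresponding to roughly 1.1 mol% Zn and 2.8 mol% Mg in the metal mixture.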
Then, an N 2 flow of 50 sccm was opened, and the temperature was increased until reaching 20 °C above the fusion temperature of the doping element, which was carried out to ensure the diffusion of the Zn atoms (or Mg atoms) in the gallium by supersaturation of the liquid solution. Table 1 presents the fusion temperature of the doping elements. Once the homogenization temperature was stabilized (Table 1), the liquid solution was homogenized for 14 h in an N 2 flow of 50 sccm. Then, the N 2 flow was closed, and an NH 3 flow of 150 sccm was opened to carry out the nitridation process; the temperature was then increased to 1000 °C, where the liquid solution remained for two hours. When this time elapsed, the temperature was decreased to 600 °C, where the NH 3 flow was closed and the N 2 flow of 50 sccm was opened again, while the temperature continued decreasing until it reached room temperature, ending the process. Thus, using this process, 8.34 g (164.49 mmol) of Zn-doped GaN powders were synthesized, with a nitrogen incorporation of 0.786 g (56.14 mmol) after the nitridation. Additionally, 6.34 g (115.65 mmol) of Mg-doped GaN powders were also synthesized, with a nitrogen incorporation of 0.405 g (28.97 mmol) after the nitridation process. Mg- or Zn-Doped GaN Films Once the Mg- or Zn-doped GaN powders were obtained, a tableting process was used to prepare the targets for the films' deposition. First, an agate mortar was used to finely grind the powders, which were then lubricated with 0.5 mL of methanol to obtain a mixture. Afterward, the mixture was placed in a Blackhawk SP25B powder press to obtain the target (with a pressure of 10 ton/cm 2 ). When the powders were compacted, the target was removed from the press to be individually sintered. The targets were sintered inside a conventional CVD furnace using an N 2 flow (150 sccm) at 900 °C for one hour to reduce the non-intentional oxygen impurities introduced with the methanol. The above process was repeated until the targets had the required hardness for the deposition by radio-frequency magnetron sputtering. Mg- or Zn-doped GaN films were deposited via radio-frequency magnetron sputtering at room temperature using an Intercovamex Sputtering System V1 (with a target size of 25.4 mm in diameter and 5 mm in thickness). A separation distance between the substrate and the target of 40 mm was applied. The chamber was evacuated to a pressure of 2 × 10 −6 Torr before layer growth. An N 2 flow was used during sputtering, and an RF power of 50 W and a gas pressure of 25 × 10 −3 Torr were maintained during the deposition. A relatively long deposition time of 8 h was required to grow a thick layer [20,22]. Additionally, silicon (100) substrates were cleaned with conventional solvents and solutions to remove organic residues. A diagram of the process used to obtain the Mg- or Zn-doped GaN films is shown in Figure 1a, while Figure 1b shows the targets prepared using Mg-doped or Zn-doped GaN powders (Gastellóu et al. [17,19]). Characterizations Mg- or Zn-doped GaN films were characterized by X-ray diffraction (XRD) measurements using a Bruker AXS D8 Discover Diffractometer at room temperature, equipped with a wavelength (Cu Kα) of 1.5406 Å. The XRD patterns were obtained in a range from 25° to 60°, with a step size and step time of 0.02° and 1 s, respectively. The X-ray tube operation conditions were 40 kV and 40 mA.
The surface morphology and elemental analysis (SEM/EDS) of the Mg-or Zn-doped GaN films were obtained using a JEOL JIB-4500 (SEM+FIB). The profilometry was made using a Dektak 150 Surface Profiler. Photoluminescence spectra (PL) were measured at room temperature with an excitation wavelength of 243 nm and a 310 nm filter using a fluorescence spectrophotometer Hitachi F-7000 FL with a 150 W xenon lamp. The Raman-scattering characterizations of the Mg-or Zn-doped GaN films were obtained using a Horiba Jobin Yvon HR-800 Micro Raman spectrophotometer. X-ray photoelectron spectroscopy (XPS) measurements were taken with an Escalab 250Xi Brochure, using an energy range from 0 to 1400 eV. Four-point probe measurements were carried out using a Lucas Signatone QuadPro Resistivity System. Figure 2 shows the main peaks in X-ray diffraction patterns for the Mg-doped GaN films (Figure 2a), and Zn-doped GaN films (Figure 2b), as well as the ICDD card 00-050-0792 for hexagonal GaN (Figure 2c), which was used to compare the different X-ray diffraction patterns. The peaks observed in Figure 2 were indexed in the ICDD card 00-050-0792 for GaN. The a peak was located in the plane orientation (100), b had a localization at (002), c had the highest intensity at (101), d peak was located at (102), while e peak was located at (110). The lattice constants for the hexagonal structure were a = 3.18 Å and c = 5.18 Å, with a ratio c/a = 1.62. The FWHM average measurements for the c peak (101) of the X-ray diffraction patterns of Figure 2, had a value of 1.15 • . The c peak broadening of the X-ray diffraction patterns could be produced for two reasons. First, this deposition technique does not produce good crystalline quality in the layers at room temperature; however, when the substrate is heated, the crystalline quality in the films could get better [22]. Second, the presence of nano-crystallites hinders the crystalline quality [16]. On the other hand, Figure 2a,b does not show a significant difference between their diffraction angles. This similarity might show that the incorporation of the magnesium or zinc atoms into the GaN lattice did not affect its crystalline structure, due to the approximate atomic radius of the dopant elements [23]. The percentage similarity of Mg and Zn compared to gallium is 88% and 92%, respectively. Using the ICDD PDF-4+ 2018 software and the Debye-Scherrer equation, the crystal size was computed, finding an average of 7 nm for all GaN films. Figure 2a,b, showed small peaks in GaN in the (102), and (110) planes, which could be related to growth temperature, a poor nitrogen incorporation, and the introduction of probable oxygen non-intentional impurities into the crystalline lattice during the deposition by sputtering. Figure 3 shows the superficial morphology for the Zn-doped GaN films (Figure 3a), and Mg-doped GaN films (Figure 3b). SEM micrographs of the grown films by radiofrequency magnetron sputtering demonstrated good adherence to the substrate, as well as homogeneity. Figure 3a showed an irregular grain surface with a grain size average of 0.14 µm. Figure 3b also demonstrated an irregular grain surface with a grain size average of 0.16 µm, where the irregular grains could be formed by a crystallite agglomerate with a size average of 7 nm, as can be calculated from the XRD analysis using the ICDD PDF-4-2018 software and the Debye-Scherrer equation. 
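The ~7 nm crystallite size quoted from the Debye-Scherrer analysis can be reproduced directly from the reported Cu Kα wavelength (1.5406 Å) and the average (101) FWHM of 1.15°. The sketch below assumes a shape factor K = 0.9 and a (101) peak position near 2θ ≈ 36.8° (the ICDD value for hexagonal GaN), neither of which is stated explicitly in the text, and it neglects instrumental broadening, which would normally be subtracted from the measured FWHM first.

```python
# Scherrer estimate of crystallite size: D = K * lambda / (beta * cos(theta)).
import math

K = 0.9                      # shape factor (assumed)
wavelength_nm = 0.15406      # Cu K-alpha, from the text
fwhm_deg = 1.15              # average FWHM of the (101) peak, from the text
two_theta_deg = 36.8         # assumed (101) peak position (ICDD 00-050-0792)

beta = math.radians(fwhm_deg)            # FWHM in radians
theta = math.radians(two_theta_deg / 2)  # Bragg angle

D_nm = K * wavelength_nm / (beta * math.cos(theta))
print(f"Crystallite size ~ {D_nm:.1f} nm")   # ~7 nm, consistent with the paper
```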
Mg-or Zn-doped GaN films, grown by sputtering using laboratory-prepared targets of Mg-and Zn-doped GaN powders, had a better adherence and homogeneity compared to grown films obtained via nitridation of GaAs substrates or MOCVD [18]. EDS elemental analysis corresponding to Figure 3a,b is shown in Figure 3c,d, respectively. These spectra only demonstrated an elemental contribution of gallium, nitrogen, and a small elemental contribution of oxygen in the Mg-or Zn-doped GaN films. It is important to mention that the residual oxygen's non-intentional impurity might be related to the hysteresis effect, which can occur in the early stages of the sputtering deposition due to system instability. This residual oxygen could affect the optical properties of the GaN films, producing emission peaks in the red luminescence (RL). EDS spectra did not show the presence of the incorporation of magnesium or zinc, which could be due to its small atomic percentage (1.0 mol %), compared to other elements. Additionally, the overall accuracy of the equipment used was approximately 1 weight percent (wt%) with a sensitivity of approximately 0.1 weight percent (wt%) [17,19]. EDS elemental analysis does not show the contributions of other impurity atoms such as carbon, or silicon belonging to the substrate. Figure 4 shows the X-ray photo-electron spectroscopy (XPS) of the Mg-or Zn-doped GaN films. Both samples showed similar behavior in the peaks for high energies of Ga 2P 3/2 and Ga 2P 1/2 with respective values of 1117.75 and 1144.61 eV, which has a difference of 1.35, and 1.41 eV for the L 2 , and L 3 levels of the element in its natural form, respectively (Figure 4a). The atomic percentage for Ga 2P was approximately 51.7%. Additionally, the energy peak related to N 1s in both samples was obtained at 398.43 eV for the K level (Figure 4b), with an atomic percentage of 31.1%. This characterization technique showed the presence of a very small amount of magnesium (2.10 mol%), and zinc (1.15 mol%) for the two film types, which could indicate the incorporation of magnesium or zinc into GaN films. Figure 4c shows the Mg 1s peak, with a binding energy of 1303.18 eV, and an energy difference of 0.18 eV for the K level of the element in its natural form. Figure 4d, shows the Zn 2P 3/2 peak with a binding energy of 1024.76 eV, and a difference of 2.96 eV for the L 3 level, which might be related to the Zn incorporation into GaN. The electron density decreasing the base element (Ga), shifts the peak positively. The oxygen non-intentional impurity showed a peak for O 1s, which corresponded to K level, and had a binding energy of 531.54 eV, and an atomic percentage of 13.9% (Figure 4e). Figure 5a, shows uniform growth with an interplanar spacing of 2.59 Å (a picture of the interplanar spacing measurement is shown in the Figure 5 box). Additionally, Figure 5a shows the GaN polycrystalline that was obtained for the hexagonal structure. Figure 5b shows the electron diffraction pattern for the sample of Figure 5a (002), which demonstrates the GaN hexagonal structure and agrees with the results obtained in Figure 2 for the (002) crystalline orientation. To measure the thickness of the Mg-or Zn-doped GaN films, the profilometry technique was used. Before making the deposition of the Mg-or Zn-doped GaN films, carbon tape was placed as steps on the substrates to obtain the correct thickness by profilometry measurements. 
Once the Mg-or Zn-doped GaN films were deposited, the carbon tape was removed, and the sensor head was placed on the film at a short distance from the step. These measurements had a negative value range due to the sensor head dropping down the step. Figure 6 shows the average thickness obtained for the GaN films. Figure 6a presents a gradual step and thickness of 6.6 µm for the Mg-doped GaN films. Figure 6b demonstrates an abrupt change in the step for the Zn-doped GaN films as compared to Figure 6a Figure 7 shows the PL spectrum for the Mg-doped GaN films (black line), which was decomposed into four components. The a emission peak consists of a shoulder located at 3.44 eV (360.83 nm-UV region), which corresponds to the band-to-band transition for the GaN hexagonal. In this same spectrum, the b peak has a predominant emission in a blue-violet band, with a range from 2.80 to 3.16 eV (443.03-392.56 nm). This emission is typical of Mg-doped GaN films and is related to the recombination of the deep donors of gallium vacancies occupied by magnesium atoms (Mg Ga ) with acceptors Mg Ga [17,[24][25][26]. The c emission energy is located in a range from 2.51 to 2.6 eV (494.22-477.11 nm) and presents a green luminescence (GL). This increases with excitation intensity in deep defects as Mg-O binds and also agrees with EDS elemental composition (Figure 3d). These defects might be native and also be related to the excess of gallium [24]. The point defects for thin films, such as in interstitial defects, vacancies, and nano-crystallites, could widen the peaks, which agrees with the X-ray diffraction patterns of Figure 2. In this case, oxygen non-intentional impurities could occupy lattice sites into the GaN structure [27]. An energy emission located at 2.26 eV (548.67 nm) (d peak) is related to yellow luminescence (YL), where Ga vacancies (V Ga ) and substitutional atoms of oxygen could be responsible for yellow luminescence [17,19]. On the other hand, for the spectrum of Mg-doped GaN films, the e peak presents a high red luminescence emission band with a range from 1.7 to 1.8 eV (729.41-688.88 nm), which could be due to the high incorporation of Mg in GaN films [24,28]. Figure 8 shows the PL spectrum for the Zn-doped films (red line), which was decomposed into four components. The a emission peak is in a range from 2.89 to 3.0 eV (429.23-413.50 nm), where this blue luminescence (BL) might be related to excitons bound to the Zn acceptors [19,24]. For Zn-doped GaN films, luminescence emissions, such as the red, yellow, and green bands, are less known. However, Monemar et al., demonstrated that the Zn doping introduces four acceptor-like centers in the GaN, which produced broad peaks of green, yellow, and red luminescence, in addition to the blue band [29]. The Zn-doped GaN films spectrum showed an emission energy at 2.6 eV (477.11 nm) (b peak) for the green luminescence (GL), while the c peak had an emission range from 1.8 to 2.2 eV (689.16-563.86 nm) for the yellow luminescence (YL). On the other hand, the d peak had an emission energy of 1.84 eV (674.18 nm) for the red luminescence (RL), which might be also produced by oxygen non-intentional impurities. These values would be an indicator of the obtaining of p-type GaN, as Monemar mentions. Figures 9 and 10 show the Raman spectra for the Mg-doped GaN films, and Zn-doped GaN films, respectively, which were deposited by radio-frequency magnetron sputtering. 
These figures show a peak with a predominant frequency for the silicon substrates at 515.1 cm −1 (TO) (graphic located in the upper right part of Figures 9 and 10). Magnifying the frequencies belonging to GaN, the classical vibration modes A 1 (TO), E 1 (TO), and E 2 (High) for the hexagonal crystal structure of GaN were identified. Figure 9 shows the E 1 (TO) and E 2 (High) modes, which overlap, forming a shoulder, with values of 550.92 cm −1 and 568.02 cm −1 , respectively. On the other hand, the A 1 (TO) vibration mode had a value of 527.54 cm −1 , with a slight shift of 2.14 cm −1 to the right of the silicon peak. This slight shift in the phononic vibration A 1 (TO) might be related to the difference between the atomic radius of Mg and Ga, confirming the transport of Mg atoms from the target to the films and demonstrating the obtaining of Mg-doped GaN [17,30]. Figure 10 shows the Raman spectrum for the Zn-doped GaN films obtained by radio-frequency magnetron sputtering, in which the E 1 (TO) and E 2 (High) vibration modes had values of 550.93 cm −1 and 566.93 cm −1 , respectively, while the A 1 (TO) vibration mode had a value of 526.04 cm −1 . The A 1 (TO) frequency showed a slight shift of 1.03 cm −1 to the right of the silicon peak, which could be due to the incorporation of Zn atoms as a dopant into the GaN films [19,30]. Conclusions Mg- or Zn-doped GaN films were obtained via radio-frequency magnetron sputtering on silicon substrates at room temperature, using laboratory-prepared targets with Mg-doped or Zn-doped GaN powders. X-ray diffraction patterns showed the possible presence of nano-crystallites with an average size of 7 nm for the GaN films, which could be related to the broadening of the peaks. SEM micrographs for the GaN films demonstrated good adherence to the silicon non-native substrate, as well as homogeneity. X-ray photo-electron spectroscopy for the GaN films showed the presence of a small amount of magnesium (2.10 mol%) and zinc (1.15 mol%), with binding energies of 1303.18 and 1024.76 eV, respectively. Additionally, TEM micrographs demonstrated homogeneous crystalline growth and a hexagonal GaN structure. The resistivity values obtained (0.57 Ωcm for Mg-doped GaN films and 0.45 Ωcm for Zn-doped GaN films) are close to the literature values for p-type GaN films. The photoluminescence spectrum for the Zn-doped GaN films had energy emissions located in a range from 2.89 to 3.0 eV (429.23-413.50 nm), which was related to excitons bound to the Zn acceptors. On the other hand, Mg-doped GaN films showed emission in the blue-violet band with a range from 2.80 to 3.16 eV (443.03-392.56 nm), which was related to the recombination of the deep donors of gallium vacancies occupied by magnesium atoms (Mg Ga ) with acceptors Mg Ga. The results shown by XPS, TEM, resistivity, and photoluminescence might be an indicator of the obtaining of p-type samples.
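The wavelength ranges quoted in parentheses next to each emission energy, here and in the photoluminescence discussion above, follow from λ(nm) ≈ 1239.84 / E(eV). A minimal sketch reproducing a few of them (agreement is within a fraction of a nanometre; the small offsets reflect the exact value of hc used):

```python
# Photon energy (eV) to wavelength (nm): lambda = h*c/E ~= 1239.84 / E.
HC_EV_NM = 1239.84

bands = {
    "Mg-doped, blue-violet (b peak)": (2.80, 3.16),
    "Zn-doped, blue (a peak)":        (2.89, 3.00),
    "Mg-doped, red (e peak)":         (1.70, 1.80),
}

for label, (e_lo, e_hi) in bands.items():
    lam_hi, lam_lo = HC_EV_NM / e_lo, HC_EV_NM / e_hi
    print(f"{label}: {e_lo:.2f}-{e_hi:.2f} eV -> {lam_hi:.1f}-{lam_lo:.1f} nm")
```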
5,374.6
2021-07-29T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
SMA Detection of an Extreme Millimeter Flare from the Young Class III Star HD 283572 We present evidence of variable 1.3 mm emission from the 1 to 3 Myr, spectral-type G2–G5 class III young stellar object (YSO), HD 283572. HD 283572 was observed on eight dates with the Submillimeter Array between 2021 December and 2023 May, with a total on-source time of 10.2 hr, probing a range of timescales down to 5.2 s. Averaging all data obtained on 2022 January 17 shows a 4.4 mJy (8.8σ) point source detection with a negative spectral index (α = −2.7 ± 1.2), with peak emission rising to 13.8 mJy in one 3 minute span, and 25 mJy in one 29.7 s integration (L ν = 4.7 × 1017 erg s−1 Hz−1). Combining our data for the other seven dates shows no detection, with an rms noise of 0.24 mJy beam−1. The stochastic millimeter enhancements on time frames of seconds–minutes–hours with negative spectral indices are most plausibly explained by synchrotron or gyrosynchrotron radiation from stellar activity. HD 283572's 1.3 mm lightcurve has similarities with variable binaries, suggesting HD 283572's activity may have been triggered by interactions with an as-yet undetected companion. We additionally identify variability of HD 283572 at 10 cm, from VLASS data. This study highlights the challenges of interpreting faint millimeter emission from evolved YSOs that may host tenuous disks, and suggests that a more detailed temporal analysis of spatially unresolved data is generally warranted. The variability of class III stars may open up new ground for understanding the physics of flares in the context of terrestrial planet formation. INTRODUCTION Stellar flares are extreme radiation outbursts from the surfaces of stars, analogous to solar flares commonly observed on the Sun, whereby stored magnetic energy accelerates charged particles through surrounding stellar plasma, radiating brightly across the electromagnetic spectrum (Dulk 1985;Feigelson & Montmerle 1999;Güdel 2002;Fletcher et al. 2011;Candelaresi et al. 2014).At millimeter wavelengths, stellar variability campaigns have mostly targeted nearby main sequence M-type stars, such as Proxima Centauri and AU Mic, finding extreme flaring events on timescales of 1-10 seconds (MacGregor et al. 2020(MacGregor et al. , 2021;;Howard et al. 2022).A handful of millimeter flares have also been detected from the closest Sun-like star, ϵ Eridani (Burton et al. 2022), and a range of other M and K-type stars from millimeter survey telescopes (e.g, the South Pole Telescope and the Atacama Cosmology Telescope, see e.g., Guns et al. 2021;Naess et al. 2021). Stellar flaring rates decline with age due to stellar spin-down (Davenport et al. 2019).Decades of monitoring at radio and X-ray wavelengths have shown that young T-Tauri stars are highly active, and undergo intense periods of flaring (Favata et al. 1998;Stelzer et al. 2007;Dzib et al. 2015;Forbrich et al. 2017;Vargas-González et al. 2021).A small number of flares from T-Tauri stars have been confirmed at millimeter wavelengths (e.g., Massi et al. 2002;Bower et al. 2003;Massi et al. 2006Massi et al. , 2008;;Salter et al. 2010;Mairs et al. 2019), Table 1.SMA observational setup, over the 8 observing dates.The horizontal lines separate the survey in which HD 283572 was included as a target (2021)(2022) from the dedicated follow-up campaign for variability (2023).All times represent time on source.The value of τ225 represents the average opacity at 225 GHz during observations. 
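The τ225 column in Table 1 is the usual zenith-opacity weather metric at 225 GHz. As a rough guide (not a calculation performed in the text), the fraction of source signal transmitted through the atmosphere scales as exp(−τ225 × airmass); a minimal sketch:

```python
# Rough atmospheric transmission at 225 GHz for a given zenith opacity and
# source elevation (plane-parallel approximation, airmass ~ 1/sin(elevation)).
# Illustrative only; the paper quotes tau_225 purely as a weather indicator.
import math

def transmission(tau_225, elevation_deg):
    airmass = 1.0 / math.sin(math.radians(elevation_deg))
    return math.exp(-tau_225 * airmass)

for tau in (0.05, 0.1, 0.2):   # typical good-to-mediocre submillimeter conditions
    print(f"tau_225 = {tau:.2f}: T(el=60 deg) = {transmission(tau, 60):.2f}, "
          f"T(el=30 deg) = {transmission(tau, 30):.2f}")
```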
Here we report an extreme millimeter brightening event associated with HD 283572 detected with the Submillimeter Array (SMA), found serendipitously during a survey for class III circumstellar dust disks. We interpret this as millimeter stellar activity, amongst the first reported for a class III YSO. We present the SMA observations and analysis in §2, our discussion in §3, and conclusions in §4. SMA data calibration We observed HD 283572 with the SMA (Ho et al. 2004) on the dates listed in Table 1 between 2021-2023, over "tracks" of 2-10 hours with 6 operable antennas. For all tracks, the receivers were tuned to an LO frequency of 225.1 or 225.5 GHz (λ = 1.33 mm), providing frequency coverage from 209.1 to 221.1 GHz and from 229.1 to 241.4 GHz (lower and upper sidebands). The SWARM correlator processed the 48 GHz available bandwidth, all with channel spacing of 140 kHz. Standard calibrator sources and setups were used with each track. Observations were conducted in two configurations (COM and SUB, with angular resolutions of 2-3 ′′ and 4-5 ′′ respectively). Two distinct modes of time sampling were employed. The first mode used integrations of 14.8 or 29.7 secs over 2-3 mins, 2-3 times per hour over 9-10 hr time spans (cycling between different sources and phase calibrators). The second mode (DDT follow-up) used shorter integrations of 5.2 secs over ∼20 mins between phase calibrator observations, for 2-3 hrs. All data (see §5) were re-binned by a factor of 32 to reduce data sizes and speed up calibration (appropriate for continuum analysis). We converted the raw data to the Common Astronomy Software Applications (CASA; CASA Team et al. 2022) measurement set format with the PYUVDATA (Hazelton et al. 2017) SMA reduction pipeline in CASA pipeline v6.4.1. We manually flagged the limited number of narrow interference spurs, as well as the outermost 2.5% "guard-band" channels from each spectral window. After calibration, the 'corrected' data were spectrally averaged to 4 channels per window to expedite our analysis with mstransform. All images were produced using natural weighting (to maximize signal-to-noise) with CASA's tclean task. Per-track variability We image each of the 8 tracks separately: observations of HD 283572 in tracks T1, T2, and T4-T8 show no significant emission (see Fig. 1, left), neither in their imaged data nor in visibility space. Combining data from all 7 tracks similarly shows no significant emission, with an estimated RMS noise of 240 µJy beam −1 , and no peaks exceeding ±4σ anywhere. For T3, however, significant emission was detected in both the image and visibility fit (see Fig. 1, right). Imaging analysis (via CASA's imfit) returns a peak emission of 4.2±0.5 mJy beam −1 centered on HD 283572's location. Visibility fitting (via CASA task uvmodelfit with a point source model) returns 4.4±0.5 mJy, centered at ∆RA = 0.42 ′′ , ∆Dec = −0.67 ′′ , consistent with the position of HD 283572 given the astrometric uncertainty of the SMA data. The non-detection in T2 and T4, and strong detection in T3, implies that the stellar activity rose significantly in the three days between 2022 Jan 14 and Jan 17, and fell on an uncertain timescale by 2023 Mar 26. There is no evidence of periodicity on the time frame of days given the non-detections T4-T8, nor on weeks-months timescales given the sparse coverage in observations.
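The spectral indices reported later in the text (e.g. αT3 = −2.7 ± 1.2) are two-point measurements between these lower and upper sidebands. Below is a minimal sketch of that estimate with first-order error propagation; the sideband centre frequencies follow the tuning described above, while the flux values are illustrative placeholders, not numbers from the paper.

```python
# Two-point spectral index alpha = log(F2/F1) / log(nu2/nu1) between the SMA
# sidebands, with first-order error propagation.
import math

nu_lsb, nu_usb = 215.1e9, 235.25e9      # approximate sideband centres (Hz)

def spectral_index(f1, sig1, f2, sig2, nu1=nu_lsb, nu2=nu_usb):
    dlnnu = math.log(nu2 / nu1)
    alpha = math.log(f2 / f1) / dlnnu
    sigma = math.hypot(sig1 / f1, sig2 / f2) / dlnnu
    return alpha, sigma

# Placeholder mJy fluxes and uncertainties for the two sidebands:
alpha, sigma = spectral_index(5.0, 0.7, 3.9, 0.7)
print(f"alpha = {alpha:+.1f} +/- {sigma:.1f}")
```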
Per-scan variability We further examine the millimeter variability within each track.To aid our initial comparison, we fit point source fluxes (with fixed offsets ∆RA = 0.4 ′′ and ∆Dec = −0.7 ′′ , consistent with the best-fit from analysis of T3) to the visibilities in all 8 tracks, binned at either 30s, 60s, and 180s resolution, an approach consistent with the studies of MacGregor et al. ( 2020) and Burton et al. (2022).These represent time-binning by factors of 1-6 (for T1 and T3); 2-12 (for T2); and 6-36 (for T4-T8).Since this represents significant re-binning for tracks T4-T8 which were observed at much higher temporal resolution, we also fit fluxes for these 5 tracks at 5.2s (full) and 10.4s temporal resolution, finding no significant emission peaks.Whilst fixing the offset may reduce the significance of individual brightening events since the position is consistent with HD 283572's location (and the peak of emission detected in T3), we expect this to still robustly detect significant brightening events.We show in Fig. 2 the resultant lightcurve for T3.By comparing these fitted flux values with their uncertainties on timescales of 30s-180s, we found one significant brightening event, evident throughout 'segment 13' (S13) of T3, with (sub-)segments of other tracks being consistent with noise. T3 segment 13 (T3:S13) spanned the time interval 2022 Jan 17, 08:45:52.9-08:48:51.0UTC.Imaging T3:S13 using the tclean task (to 2 σ threshold with a central 5 ′′ radial mask), we find (using imfit) that the total (unresolved) flux co-located with HD 283572 is 13.8±1.8mJy, a detection of the brightening event at the 7.7σ level.Imaging track T3 excluding the T3:S13 segment, we find (using imfit) an unresolved flux of 3.8±0.4mJy.Both images are presented in Fig. 3. Fitting again for the upper and lower sidebands of the data separately, we find the emission during T3:S13 to have a negative spectral index of α F1 = −3.0±2.0 and α T3 minus F1 = −1.0±1.2 in the full track T3 excluding T3:S13 data, consistent with this remaining fixed throughout the entire track.Although cross-hand polarization data were not collected (XY and YX), we are able to leverage the changing parallactic angle of the source to set a constraint on the total linear polarization fraction (p Q+U ) during the T3 track of p Q+U = 0.21 ± 0.21, consistent with a null detection.We measured the T3:S13 fluxes in the XX and YY polarization channels as F XX,T3:S13 = 12.6±2.3mJy and F YY,T3:S13 = 18.7±2.5mJy, also consistent with a null detection. In Fig. 
4 we present images of all six scans during T3:S13, imaged at their native 29.7 sec resolution.As can be seen, there is emission present at the location of HD 283572 in integrations 3 and 6.Applying the same imfit routine to these images, we measure brightness peaks of 25.0±3.8mJy and 23.0±4.8mJy respectively (and later refer to these two events as 'F1A' and 'F1B' respectively).This method only yielded 2-4σ significant fits for the T3:S13 integrations 1, 2, 4 and 5 separately.Whilst these results may suggest that the majority of the observed flux over the 3 min cadence T3:S13 arose from just two 29.7 sec integrations, our data are unable to determine the underlying variability during F1.Further, whilst integrations 1, 2, 4 and 5 have an average flux 11.5±2.3mJy, and spectral index α = −0.3±3.2, the spectral indices during events F1A and F1B are α F1A = −11±5 and α F1B = −0.5±5.0 respectively, both negative albeit with large uncertainties and our data are thus consistent with having a fixed spectral slope during event F1. Summary of analysis We summarize four significant findings.First, during an active-period (T3) HD 283572's emission was significantly elevated for at least 9 hours (which we refer to as HD 283572's 'active-quiescent' level).Second, during this active-period in the 3-minute segment T3:S13, HD 283572's emission substantially increased, (which we refer to as 'F1').The difference in flux of 'F1' and the active-quiescent level (10.0±1.8 mJy) is statistically significant (5.5σ), and represents a flux enhancement by a factor of around 3.6.Third, in HD 283572's activequiescent period, its flux rose by ≳16× over the inactive period, and by ≳60× during F1.Fourth, the millimeter spectral index is consistently steep and negative during F1 and the active-quiescent period.We tabulate all observed and derived flare properties in Appendix A, Table 2. Millimeter circumstellar dust emission around class III stars is typically faint (<1 mJy at the distances of nearby (∼ 140 pc) star-forming regions), has a positive spectral index and is stable in flux on timescales spanning years (e.g., Wyatt 2008;Matthews et al. 2014;Hughes et al. 2018;Lovell et al. 2021), whereas emission from stellar flares can show enhancements on timescales of seconds to hours, with steep negative spectral indices.The stochastic millimeter emission of HD 283572 is most plausibly explained as resulting from stellar activity. DISCUSSION In assessing the flare frequency rate, we note that the SMA observed HD 283572 for a total on-source time of 10.2 hours, within which we detected one (sparsely sampled) active-quiescent period and one F1-type event. From this we estimate the rate of active periods as 0.001 − 0.9 day −1 , during which the rate of F1-type flare events is 0.005 − 0.1 hour −1 , accounting for the fact that the sparse sampling may have missed the beginning and ending of these events (for the active-quiescent period, the event may extend anytime between the end of T2 to the start of T4; 463 days, and for F1, from the end of T3:S12 to the start of T3:S14; 58 mins). Data from the Karl G. Jansky Very Large Array (VLA) Sky Survey (VLASS; Lacy et al. 
2020) shows radio emission associated with HD 283572 in 2019 and 2021 (which we present in detail in Appendix B to investigate HD 283572's radio emission properties, outside of our SMA-observation window).We determine the 3 GHz radio luminosity density of HD 283572 from VLASS epoch 1.2 and 2.2 as L R = 4 × 10 16 erg s −1 Hz −1 and L R = 9 × 10 15 erg s −1 Hz −1 respectively.We find HD 283572's X-ray luminosity (log L X ∼29.4 erg s −1 ; Favata et al. 1998;Pye et al. 2015) to be a factor of 3-12 above the Güdel-Benz relationship (well within the scatter of Guedel & Benz 1993;Benz & Guedel 1994, i.e., L X /L R = κ × 10 15.5±0.5 ), assuming κ = 0.03, as shown by e.g., Dzib et al. (2013Dzib et al. ( , 2015) ) to fit populations of YSOs in Ophiuchus and Taurus.Consistency with the Güdel-Benz relationship suggests that HD 283572 is a magnetically active young star.The millimeter spectral indices of T3, F1, and T3 excluding F1 are all negative, with α T3 = − 2.7±1.2,α F1 = − 3.0±2.0and α T3 minus F1 = − 1.0±1.2.By extrapolating the flux implied by these slopes (and accounting for the typical ∼1−10 GHz turnover in the gyro-synchrotron spectrum, see e.g., Güdel 2002) during a flare HD 283572's 3 GHz emission should plausibly reach 10s-100s of mJy.We do not observe fluxes this high in the VLASS data, suggesting HD 283572 was in an inactive period during these measurements.We can constrain the dominant emission source probed by the VLA with our SMA measurements by measuring the millimeter-radio spectral index, anchored by the SMA upper-limit of 0.24 mJy.We estimate α as <−0.2, <−0.5 and <−0.7 respectively for VLASS 2.2, 1.2 and a 5 GHz 1989 observation of HD 283572 (O'Neal et al. 1990) by assuming a power-law flux scaling of F ν ∝ ν α for the three VLA measured 0.5, 2.1 mJy (3GHz) and 3.3 mJy (5GHz) fluxes.These spectral indices are consistent with non-thermal gyro-synchrotron/synchrotron emission (Güdel 2002), which at millimeter wavelengths is likely optically thin (Klein & Chiuderi-Drago 1987).This emission originates from electrons gyrating along stellar magnetic field lines after magnetic reconnec-tion events in stellar magnetospheres and energize nonthermal particles on rapid timescales. Emission sources We associate a brightness temperature with the SMA emission via T b ≈ 1 2k b ( d r ) 2 λ 2 ∆S (see e.g., Güdel 2002, equation 3, for distance d, emission radius r, wavelength λ, and flux enhancement ∆S), by assuming emission emanates from length scales ranging from a few Earth radii (comparable to Solar active regions) to a few Solar radii (comparable to flaring loops).We find that event F1 is associated with a brightness temperature in the range T b ≈ 10 8 − 10 10 K, in accordance with incoherent gyro-synchrotron/synchrotron emission. Synchrotron and gyro-synchrotron emission have different millimeter properties, however we are unable to distinguish between these with our observations. 
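As a check on the order-of-magnitude arguments in this subsection and the following paragraph, the sketch below evaluates the brightness-temperature relation T_b ≈ (1/2k_B)(d/r)²λ²ΔS quoted above and inverts ν0 = 2.8 × 10⁶ B γ² Hz at the observing frequency. The ~124 pc distance (consistent with the luminosity densities quoted elsewhere in the text) and the one-Solar-radius source size are assumptions of this sketch, not values fixed by the paper; since T_b scales as 1/r², smaller assumed source sizes move the estimate toward the upper end of the quoted 10⁸-10¹⁰ K range.

```python
# Order-of-magnitude estimates for the F1 event (assumed d and r; T_b ~ 1/r**2).
k_B, pc, R_sun = 1.380649e-23, 3.0857e16, 6.957e8   # SI units

d   = 124 * pc          # assumed distance to HD 283572 (m)
r   = 1.0 * R_sun       # assumed radius of the emitting region (m)
lam = 1.33e-3           # observing wavelength (m)
dS  = 10e-3 * 1e-26     # ~10 mJy flux enhancement (W m^-2 Hz^-1)

T_b = (d / r) ** 2 * lam ** 2 * dS / (2 * k_B)
print(f"T_b ~ {T_b:.1e} K")                      # ~2e8 K for these assumptions

# Field strength from nu_0 = 2.8e6 * B * gamma**2 Hz (B in Gauss).
nu_obs = 225e9
for gamma in (3.0, 10.0):
    B_kG = nu_obs / (2.8e6 * gamma ** 2) / 1e3
    print(f"gamma = {gamma:.0f}: B ~ {B_kG:.1f} kG")   # ~8.9 kG and ~0.8 kG
```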
The strength of the magnetic field that we estimate as needed for gyro-synchrotron emission to radiate at 225 GHz however seems too high.We estimate this by associating the SMA observation frequency with a harmonic (s) of the gyro-frequency of emission, i.e., ν 0 = 2.8 × 10 6 Bγ 2 Hz (for the peak spectral frequency ν 0 ; Güdel 2002).For gyro-synchrotron, s=10−100 and γ≲2−3, whereas for synchrotron, s>100 and γ≫1.For example, in a gyro-synchrotron case (γ=3) we find B≳9 kG, whereas in a synchrotron case (γ=10) we find B≲1 kG.The lower value appears more consistent with expectations from low-intermediate mass YSO B-fields (see e.g., Folsom et al. 2016), and thus may instead imply that these HD 283572 SMA observations probed a synchrotron emission mechanism. Comparison with published millimeter flares There are few published detections of millimeter flares that constrain their luminosities, timescales, spectral slopes, and polarization.Here we briefly compare/contrast HD 283572's stellar activity with available published constraints. Flares from single stars The details of HD 283572's millimeter emission appear different than those reported for main sequence flares from single stars, such as those of AU Mic, Proxima Centauri, ϵ Eri and the Sun (Krucker et al. 2013;MacGregor et al. 2018MacGregor et al. , 2020;;Burton et al. 2022, with millimeter luminosity densities of 10 13 -3 × 10 14 erg s −1 Hz −1 , 10s of seconds flare timescales, and for the Sun and ϵ Eri, positive/flat spectral indices).HD 283572's flare luminosity density is a factor of ∼1000× higher than the brightest of those, ϵ Eri (based on F1/F1A), persisted over ≳9 hours (∼1000× longer) and had a negative spectral index throughout. The Orion-based class III YSO, GMR-A showed a powerful stellar flare (Bower et al. 2003), with properties more closely matching those of HD 283572.Nevertheless, the power (νL ν ) associated with GMR-A was around 10× higher than what we report for HD 283572, and its spectral slope was found to change over time (whereas we find no evidence of variability in HD 283572's spectral slope). Flares from binary/multiple stellar systems A number of YSO millimeter/radio flares have been inferred to result from periodic magnetospheric interactions within stellar multiples (e.g., V773 Tau, DQ Tau and JW 566, see Massi et al. 2006;Salter et al. 2010;Mairs et al. 2019, respectively, all class II systems, with the flaring periodicity of V773 Tau and DQ Tau being directly tied to known orbital periastron-passages).In each case, their peak luminosity densities reached comparable/higher levels than HD 283572 (by a factor of a few to ≈ 100), their event times spanned hours (although JW 566 is limited by the JCMT cadence of several days), however their spectral indices were either positive (DQ Tau), flat (JW 566) or unreported (V773 Tau, although Massi et al. 2006, infer this to be synchrotron emission which would have a negative spectral index). 
The RS Canum Venaticorum (RS CVn) variable binary star, σ Gem showed a millimeter flaring event in 2004 (Brown & Brown 2006).σ Gem presented nondetections over many days preceding its flare, became active over a period of several hours (rising to a peak lasting only minutes, with a power lower than that of HD 283572's by a factor of 7), returned to a detected active-state, and then in subsequent observations was undetected.This light-curve in particular has a temporal profile similar to HD 283572.Since the σ Gem measurements lack polarization or spectral index constraints, we are unable to compare their properties further. Detections from cosmology telescopes Bright stellar flares have been measured during deep millimeter observations with wide-field cosmology telescopes (e.g., the South Pole Telescope, SPT, and the Atacama Cosmology Telescope, ACT, see Guns et al. 2021;Naess et al. 2021, respectively).These surveys detected 90/150/220 GHz positive/flat spectral index sources with L ν ≈ 10 16 − 10 19 erg s −1 Hz −1 that are associated with stars, over timescales spanning minutesdays.These luminosity densities are consistent with HD 283572's, however the positive/flat spectral indices are inconsistent.The SPT/ACT cadences make it difficult to compare with HD 283572's light-curve. Is HD 283572 a binary? Overall, HD 283572's millimeter activity appears to more closely match systems with companions, despite HD 283572 having no evidence of binarity (Krolikowski et al. 2021).We however cannot rule out the possibility that HD283572 is in an unresolved binary system with interactions with such an unresolved companion having triggered flares/stellar activity.HD 283572's Gaia RUWE (0.925) implies that any binary companion would need to be low-mass and on a short-period orbit, which could easily have gone undetected given the star's large rotational broadening (i.e., v sin i = 110±20 km s −1 , see e.g., Fernandez & Miranda 1998).HD 283572's mass of ≈1.4 M ⊙ (estimated in § 1) suggests it will become an early-F/late-A-type main sequence star, which statistically favors binarity.New observations are needed to ascertain the multiplicity of HD 283572, and then further whether a binary interaction triggers HD 283572's flares. Class III stars: ideal probes of stellar flare physics There is a rich history of centimeter-wavelength radio studies of WTTSs (which almost fully overlap with the population of class III YSOs).VLA observations have shown these stars can be radio-bright (e.g., O'Neal et al. 1990;White et al. 1992) and more recently, enhanced in both their brightness and variability versus less-evolved YSOs (see e.g.Figures 3 and 4 of Dzib et al. 2013Dzib et al. , 2015)).The increase in class III YSO radio variability make them ideal for systematic searches for millimetercounterpart flares (e.g., utilizing new Gaia population analyses, see Luhman & Esplin 2020;Krolikowski et al. 2021;Luhman 2022aLuhman ,b, 2023a,b, ,b, for which class III stars typically dominate YSO populations in star-forming regions). Our analysis suggests that temporal analyses of class III YSO millimeter observations are generally warranted to distinguish between stellar and circumstellar components.Importantly, class III stars open a window on planet formation/disk evolution after the dispersal of the bulk protoplanetary/primordial disk material.Class III YSOs can retain large reservoirs of cold dust, and/or smaller reservoirs of hot dust that trace ongoing terrestrial planet formation processes (Lovell et al. 2021;Michel et al. 
2021) over timescales that fully span pre-main sequence evolution (Kenyon & Bromley 2004;Morbidelli et al. 2012).Combined with the implication that proton-rich flare events can disrupt the growth of planetary atmospheres (Tilley et al. 2019), constraints on the physics of class III YSO flares could provide key inputs to exoplanet atmospheric growth models. CONCLUSIONS We present new observations of HD 283572 with the Submillimeter Array (SMA) taken from 2021-2023, alongside VLA data taken from 2019-2021.We show that HD 283572 is variable at millimeter and radio wavelengths, and underwent one extreme millimeter brightening event on 2022 Jan 17.Our analysis suggests HD 283572's millimeter variability was due to an extreme stellar flare, during a prolonged active-quiescent period that was both preceded and followed by inactive periods.By constraining HD 283572's spectral index, brightness temperature, and linear polarization fraction, we find that the emission is most likely from gyro-synchrotron/synchrotron radiation.HD 283572's stellar activity appears broadly similar to that reported for variable binaries based on their peak luminosities, timescales/light-curves and spectral slopes.These results suggest that HD 283572's activity may have been induced by interactions with an as-yet undetected companion.Although currently a class III YSO, HD 283572 will likely evolve into a late-A/early F-type star on the main sequence, indicating that intermediate mass stars can be active at millimeter wavelengths at young ages.Millimeter observations of class III YSOs/WTTTs provide an opportunity to study the physics of stellar flares, and their implications for terrestrial planet formation/planetary atmospheric growth. SOFTWARE AND THIRD PARTY DATA REPOSITORY CITATIONS The SMA data used here are from projects 2021B-H003, 2021B-S014 and 2022B-S069 and can be accessed via the Radio Telescope Data Center (RTDC) at https://lweb.cfa.harvard.edu/cgi-bin/sma/smaarch.pl after these have elapsed their proprietary access periods (note only data that passed strict quality control checks was included in this work).The VLA data used here are from the VLASS project, epochs 'VLASS1.2'and 'VLASS2.2',and collected from the Canadian Astronomy Data Centre (CADC).The National Radio Astronomy Observatory (NRAO) is a facility of the National Science Foundation (NSF) operated under cooperative agreement by Associated Universities, Inc. IRADA is funded by a grant from the Canada Foundation for Innovation 2017 Innovation Fund (Project 35999), as well as by the Provinces of Ontario, British Columbia, Alberta, Manitoba and Quebec.This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia),processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium).Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. Facilities: Submillimeter Array (SMA), Karl G. Jansky Very Large Array (VLA) Software: This research made use of a range of software packages, highlighted here: astropy (Astropy Collaboration et al. 2013Collaboration et al. , 2018)), CASA (CASA Team et al. 2022), pyuvdata (Hazelton et al. 
2017).The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community, where the Submillimeter Array (SMA) is located.We are most fortunate to have the opportunity to conduct observations from this mountain.We further acknowledge the operational staff and scientists involved in the collection of data presented here. APPENDIX A. DERIVED FLARE PROPERTIES We collate in Table 2 the observed and derived properties of HD 283572 during our SMA observations, including the ID associated with the emission, the average flux, time/timescale of the event/s, luminosity density, spectral index and the linear polarization fraction (in the case of the full track, and full track minus event F1, where sufficient parallactic angle coverage (during the flare) is available to unambiguously constrain this fraction).We present all values here since some derived values are not strictly independent (e.g., in the case of T3 includes data from T3:S13), whereas 'T3:S13 (F1)' is fully independent of the data in 'T3:S1-12 and T3:S14-20'. B. VLASS COUNTERPART RADIO VARIABILITY The Very Large Array (VLA) has observed HD 283572 on a number of occasions, with O'Neal et al. (1990) observing first at 5 GHz, measuring a flux of 3.29±0.28mJy.Recently, the VLA re-observed HD 283572 during the VLA Sky Survey (VLASS; Lacy et al. 2020, in two epochs at 2-4 GHz, 2.5 ′′ resolution, to a depth of ∼120 µJy).Using the multi-epoch version of SODA (SODA is designed to extract single-epoch cutouts of 'quicklook' VLASS images, as produced by Mark Lacy, for which the code is provided here: https://gitlab.nrao.edu/mlacy/vlass-vo/-/blob/main/SODA multi pos multi ep.ipynb) we sourced fits images centred on the coordinates of HD 283572 (in epochs 'VLASS1.2'and 'VLASS2.2',observed on dates of 2019 Mar 19 and 2021 Oct 28 respectively), which both showed unresolved emission in the central pixels coincident with HD 283572's location, which we present in Fig. 5.We fitted 2D Gaussian ellipses (within 5 ′′ of HD 283572's location) using ASTROPY GAUSSIAN2D, and found HD 283572 to host unresolved integrated fluxes of 2.13±0.13mJy and 0.49±0.13mJy (epochs 1.2 and 2.2 respectively).Our analysis demonstrates HD 283572 is highly variable at radio wavelengths on 2-year timescales, being dimmer by 4.3× in 2021 (the immediate run-up to our SMA observations) versus 2019. Figure 1 . Figure1.Left: cleaned image of the combined SMA measurement sets from all tracks in which no significant emission is detected.Right: cleaned image of the SMA measurement set for track T3, in which a significant point source is present at the location of HD 283572.The noise is lower in the image with no detection (RMS combined =0.24 mJy beam −1 than the image with the detection (RMST3=0.48mJy beam −1 ).Contours are at ±4, 6 and 8σ levels.SMA beams are in the lower-left. of the SMA data.The significance of the detection in the single track T3 image is 8.2σ (and 8.8σ for the visibilitybased fit).Fitting for the upper and lower sidebands (209.1-221.1 GHz and 229.1-241.4GHz, respectively) of the SMA data independently, we determine a spectral index of α T3 = −2.7±1.2,where α := ∂ log F ν /∂ log ν. Figure 2 . Figure 2. 
Flux versus time for Track T3, at 29.7 s (black) and 180 s (orange) resolution, including the track-averaged flux (pink, dashed).Shown are the 20 regions we refer to as segments from S1 to S20.All other tracks show data consistent with noise, whereas this track shows persistent elevated emission, with a significant brightening event during S13. Figure 3 . Figure 3. Left: cleaned image of T3 with T3:S13 excluded in which a bright point source is present at HD 283572's location.Right: cleaned image of T3:S13 on its own, in which an even brighter point source is present at HD 283572's location.The rms in the left and right images are 0.64 mJy beam −1 and 2.1 mJy beam −1 respectively, and so both are significant detections, albeit at different flux levels.Contours are at ±4 and 6σ levels.SMA beams are in the lower-left.In §3.1 we first discuss the derived millimeter/radio properties of HD 283572 implied by our observations, compare these with the literature of other millimeter flares in §3.2, and explore future implications in §3.3. Figure 4 . Figure 4. From left-right, then top-bottom: maps of the 29.7 integrations during Track T3, segment 13 (T3:S13; when the event F1 was detected).Image rms values range from 3.8-4.8mJy beam −1 .Integrations 3 and 6 show emission at the 23mJy and 25mJy level respectively coincident with HD 283572's location.Contours are at ±4σ levels.SMA beams are in the lower-left. Figure 5 . Figure 5. VLASS 'Quick Look' images (left: epoch 1.2; right: epoch 2.2).In both images, the coordinates are centred on the location of HD 283572.Clear in 1.2 is the detection of a bright point source, whereas in 2.2 the significance is much reduced.The rms in both images is 0.125 mJy beam −1 .Contours are at ±4, 8 and 12 σ levels.VLA beams are in the lower-left. Table 2 . Flare properties based on all observations of HD 283572, including the average flux over the observation timescale, associated luminosity density (at the distance to HD 283572) Lν , spectral index α, and (where this can be measured) the linear polarization fraction pQ+U .Lizhou Sha and Alexander Binks for discussions on TESS data and its analysis.The authors thank the anonymous referee for useful comments which substantially improved the content and clarity of this work.
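The luminosity densities quoted in the text (L_R ≈ 4 × 10¹⁶ and 9 × 10¹⁵ erg s⁻¹ Hz⁻¹ for the two VLASS epochs, and Lν ≈ 4.7 × 10¹⁷ erg s⁻¹ Hz⁻¹ for the 25 mJy millimeter peak) follow from Lν = 4πd²Fν. The sketch below assumes d ≈ 124 pc, which is not stated explicitly in this section but reproduces the quoted values to within rounding.

```python
# L_nu = 4 * pi * d**2 * F_nu, with flux in erg s^-1 cm^-2 Hz^-1
# (1 mJy = 1e-26 erg s^-1 cm^-2 Hz^-1). The ~124 pc distance is an assumption.
import math

pc_cm = 3.0857e18
d = 124 * pc_cm

fluxes_mjy = {
    "VLASS 1.2 (3 GHz, 2019)": 2.13,
    "VLASS 2.2 (3 GHz, 2021)": 0.49,
    "SMA F1A peak (225 GHz)":  25.0,
}

for label, f_mjy in fluxes_mjy.items():
    L_nu = 4 * math.pi * d**2 * f_mjy * 1e-26
    print(f"{label}: L_nu ~ {L_nu:.1e} erg s^-1 Hz^-1")
```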
6,785.8
2024-02-01T00:00:00.000
[ "Physics" ]
Classification of Pneumonia images on mobile devices with Quantized Neural Network This paper presents an approach for the classification of child chest X-ray images into two classes: pneumonia and normal. We employ Convolutional Neural Networks, from pre-trained networks together with a quantization process, using the platform TensorFlow Lite method. This reduces the processing requirement and computational cost. Results have shown accuracy up to 95.4% and 94.2% for MobileNetV1 and MobileNetV2, respectively. The resulting mobile app also presents a simple and intuitive user interface. everywhere, but it is more prevalent in South Asia and sub-Saharan Africa (World, 2016). Chest X-rays are often used to assess cases of pneumonia and are the most commonly used diagnostic tests for chest-related diseases. A very small dose of ionizing radiation is used to produce breast imaging . Pneumonia causes a pulmonary consolidation, meaning that the pulmonary alveoli are full of inflammatory fluid, instead of air (Iorio, et al., 2018). The image identification of pneumonia, as shows in Figure 1, is related to the opacities seen on the radiography. Normal lungs exhibit darker parts near the spine (bronchi filled with air (Kunz, et al., 2018)), whereas abnormal lungs show lighter (opaque) patches, as alveoli are filled with fluid. The low accuracy in the diagnosis of pneumonia may lead to excessive prescription of antibiotics, which is harmful to patients, and is also a cause of inventory waste. Antibiotics also kill beneficial bacteria, causing unintended health problems (Kurt, Unluer, Evrin, Katipoglu, & Eser, 2018). Moreover, the excessive use of antibiotics may lead to the proliferation of drug resistant bacteria. Considering this scenario, computational systems capable of providing fast and accurate Pneumonia diagnosis are of great importance and are becoming increasingly common (Manogaran, Varatharajan, & Priyan, 2018). Used as an aid tool, they can minimize errors (Malmir, Amini, & Chang, 2017), while screening potential infected patients. A recent trend in classification is the use of deep learning techniques (especially Convolutional Neural Networks -CNN's) that can deliver high classification accuracy at the expenses of high computing cost. To reduce this cost, several quantization schemes have gained attention recently, with some focusing on quantization of weight and others focusing on the activation quantizations (Choi, et al., 2018). As a result, extensive research on weight quantification and activation to minimize CNN's computing and storage costs has been conducted, making it possible to effectively host such solutions on platforms with limited resources (for example, mobile devices) (Choi, et al., 2018). This paper describes a mobile device system capable of classifying children's chest Xray images into two classes: Pneumonia and Normal. Samples from a pre-trained CNN are subject to a quantization stage through the TensorFlow Lite platform (Jacob, et al., 2018), considerably reducing the computational cost and processing times. The proposed method uses two pre-trained neural networks, known as MobileNetV1 (Howard, et al., 2017) and MobileNetV2 (Sandler, Howard, Zhu, Zhmoginov, & Chen, 2018), for the construction of a mobile application aiming at greater mobility. As a result, fast and accurate diagnosis of childhood pneumonia, especially in remote areas with precarious conditions can be attained. This paper comprises four sections. 
Section 2 presents the materials and methods; results and conclusions are given in Sections 3 and 4, respectively.

The Proposed Method

We now present the proposed methodology for the training and classification of pneumonia from X-ray images on mobile devices.

Dataset

We start by describing the dataset used in the experiments. The images come from the Guangzhou Women and Children Medical Center and were taken from pediatric patients aged one to five years as part of the routine clinical procedure. The dataset contains 5856 chest X-ray images (anteroposterior), categorized as: Viral Pneumonia (1493), Bacterial Pneumonia (2780) and Normal (1583). The dataset underwent quality control, with garbled and low-quality images removed. The diagnosis was given by two specialist physicians and checked by a third in order to minimize errors. Figure 2 shows how the dataset was divided into training and validation sets, as well as the number of images in each class. The first two columns represent the training and test split: the blue column represents the training portion with 70% of the images, while the orange column represents the test portion with 30%. The last two columns represent the number of images per class: the Normal class (blue) contains 27% of the images, while the Pneumonia class (orange) contains 73%.

Method's Pipeline

The diagram illustrated in Figure 3 shows the method's main constituent parts. It comprises four main modules: a) a pre-trained model; b) a transfer learning process in which X-ray lung images are trained; c) quantization through TensorFlow Lite (Hubara, Courbariaux, Soudry, El-Yaniv, & Bengio, 2017), which aims at optimizing the model for the mobile application; and d) the Android app for the final classification of X-ray images.

Transfer learning

Transfer learning is a common trend in deep learning which aims at storing knowledge gained while solving one problem and applying it to a different but related problem. It is present in many applications, such as (Abidin, et al., 2018) (Douarre, Schielein, Frindel, Gerth, & Rousseau, 2018) (Khatami, et al., 2018) (Baltruschat, Nickisch, Grass, Knopp, & Saalbach, 2018) (Chen, Dou, Chen, & Heng, 2018). The technique consists in using a model pre-trained on classes distinct from the problem to be solved (Wu, Qin, Pan, & Yuan, 2018). This becomes an advantage when using small datasets (Shallu & Mehra, 2018), because it is difficult to obtain large enough datasets for specific problems (Ramalingam & Garzia, 2018), and it avoids training complex models such as VGG19, Xception and Inception V3 from scratch. Transfer learning normally preserves the initial and intermediate layers, while the final layer is replaced and trained again (Ramalingam & Garzia, 2018). Figure 4 illustrates the transfer learning process. For the training of the neural networks, all weights are set as non-trainable, since they were trained on the ImageNet dataset. The last layer of each network is removed and four dense layers are added, with the last one having the same number of neurons as the number of classes to be classified. The SoftMax function is used to activate the last layer of the networks in Figure 4.
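The transfer-learning setup described above can be sketched as follows. This is a minimal illustration only, assuming TensorFlow/Keras and a 224×224 RGB input; the widths of the three hidden dense layers are not specified in the paper, so the values used here are placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 2  # normal, pneumonia

# Base network pre-trained on ImageNet, used as a frozen feature extractor
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # all pre-trained weights are set as non-trainable

# Replace the original top with four dense layers; the last one has one
# neuron per class and a SoftMax activation, as described in the text.
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dense(256, activation="relu")(x)   # placeholder width
x = tf.keras.layers.Dense(128, activation="relu")(x)   # placeholder width
x = tf.keras.layers.Dense(64, activation="relu")(x)    # placeholder width
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100)
```

The Adam optimizer, learning rate of 0.0001 and 100 epochs mirror the training settings reported later in the results section.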
Quantized Neural Networks

Quantized Neural Networks (QNNs) use low-precision weights and activations. These networks are trained from scratch at an arbitrary fixed-point precision. At iso-precision, QNNs that use fewer bits require deeper and wider network architectures than networks that use more accurate operators, while requiring less complex arithmetic and fewer bits per weight (Moons, Goetschalckx, Van Berckelaer, & Verhelst, 2017). A method was introduced to train QNNs with weights and activations of extremely low precision (for example, 1 bit) at runtime. During the training stage, quantized weights and activations are used to calculate the parameter gradients. At inference time, QNNs dramatically reduce memory size and accesses, replacing most arithmetic operations with bit-wise operations (Hubara, Courbariaux, Soudry, El-Yaniv, & Bengio, 2017). A quantization scheme that allows inference to be performed using integer-only arithmetic was proposed in (Jacob, et al., 2018). It can be implemented more efficiently than floating-point inference on commonly available integer-only hardware. In our approach, the weights of an existing trained model are loaded and adjusted for quantization. We used the pre-trained networks MobileNetV1 and MobileNetV2. After being trained with the pneumonia images, TensorFlow Lite quantization was applied. Results are given in Table 1.

The quantization scheme is an affine mapping between integers q and real numbers r, that is, of the form (Jacob, et al., 2018):

Equation 1: r = S (q − Z),

where S (scale) and Z (zero-point) are the quantization parameters. The scheme is illustrated with the multiplication of two square N × N matrices of real numbers, r1 and r2, with their product represented by r3 = r1 r2. We denote the entries of each of these matrices (α = 1, 2, 3) as rα(i, j) for 1 ≤ i, j ≤ N, the quantization parameters with which they are quantized as (Sα, Zα), and the quantized entries as qα(i, j). Then, Equation 1 becomes:

S3 (q3(i, k) − Z3) = Σ_{j=1}^{N} S1 (q1(i, j) − Z1) S2 (q2(j, k) − Z2).

Table 2 shows how the resulting models were fully quantized. We still keep float input and output for convenience.
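One way to apply the TensorFlow Lite quantization step described above is post-training quantization through the converter's optimization flag, sketched below. This is an illustrative sketch, not necessarily the exact procedure used by the authors; the `model` object is assumed to be the trained Keras classifier from the transfer-learning step, and float input/output is kept, mirroring the choice mentioned in the text.

```python
import tensorflow as tf

# 'model' is assumed to be the trained Keras model (e.g., the MobileNet-based
# classifier built earlier). Post-training quantization is requested through
# the converter's optimization flag; inputs and outputs remain float.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flat buffer is what the Android app loads through the
# TensorFlow Lite Java API.
with open("pneumonia_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```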
MobileNetV1

This network belongs to a class of efficient models called MobileNets, designed for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depthwise separable convolutions to build light, deep neural networks. Two simple global hyperparameters are introduced that trade off efficiently between latency and accuracy. These hyperparameters allow the model builder to choose the right-sized model for the application based on the constraints of the problem (Howard, et al., 2017). The MobileNet model is based on depthwise separable convolutions, a form of factorized convolution that factorizes a standard convolution into a depthwise convolution and a 1 × 1 convolution called a pointwise convolution. In MobileNets, the depthwise convolution applies a single filter to each input channel. The pointwise convolution then applies a 1 × 1 convolution to combine the outputs of the depthwise convolution. The depthwise convolution with one filter per input channel can be written as in Equation 5 (Howard, et al., 2017).

MobileNetV2

This network advances the state of the art for mobile-oriented computer vision models, significantly reducing the number of operations and the memory required while maintaining the same accuracy. The main contribution is a new layer module: the inverted residual with linear bottleneck. This module takes as input a compressed low-dimensional representation that is first expanded to a higher dimension and filtered with a lightweight depthwise convolution (Sandler, Howard, Zhu, Zhmoginov, & Chen, 2018).

Mobile Application Development

The method chosen in our work uses the Java API of TensorFlow Lite (Jacob, et al., 2018), suitable for Android and iOS application development. TensorFlow Lite is TensorFlow's solution for lightweight models on mobile and embedded devices, and it allows a trained model to run on a mobile device. It also makes use of hardware acceleration on Android through the Machine Learning APIs (see Figure 5). In this case, the application was developed for the Android platform to classify thoracic images. The goal is to aid in the rapid and accurate diagnosis of childhood pneumonia. To this end, a simple and intuitive interface was developed, consisting of two functions: a browse option (Figure 6), which loads an image stored on the device, and a classification option, which displays the most likely class for the image.

Results and Discussion

In this section we present the results obtained in each stage of the development of this paper. We provide a comparison between the pre-trained networks MobileNetV1 and MobileNetV2, with the batch size parameter set to 30 and 40, respectively. Both networks employ Adam as the optimizer, 100 epochs in each training run and a learning rate of 0.0001. MobileNetV1 took 150 minutes to be fully trained, while MobileNetV2 took 200 minutes.

Evaluation Metrics

The model precision can be estimated by Equation 6, which is the sum of the differences between the actual values yi and the predicted values ŷi; this allows us to infer the generalization capacity of the network:

Equation 6: E = Σ_{i=1}^{n} (yi − ŷi).

As a statistical tool, we have the confusion matrix, which provides the basis for describing the accuracy of the classification as well as characterizing the errors, helping to refine the accuracy (Saraiva, et al., 2018). The confusion matrix is an array of numbers arranged in rows and columns that expresses the number of sample units assigned to a particular category by a decision rule, compared with the actual category. The measures derived from the confusion matrix are: total accuracy (used in this work), individual class precision, producer precision, user precision and the Kappa index, among others. The total accuracy is calculated by dividing the sum of the main diagonal of the error matrix by the total number of samples collected N, according to Equation 7:

Equation 7: Total accuracy = (Σ_i x_ii) / N.

To fully evaluate the effectiveness of the models, precision (TP / (TP + FP)) and recall (TP / (TP + FN)) are examined. Unfortunately, precision and recall are often in tension; that is, improving precision usually reduces recall and vice versa. The F1 score is a simple metric that takes both precision and recall into account, so one can try to maximize it to improve the model. It is simply the harmonic mean of precision and recall:

Equation 10: F1 = 2 · (Precision · Recall) / (Precision + Recall).

The AUC-ROC curve is a measure of performance for classification problems at various threshold settings. ROC is a probability curve and AUC represents the degree or measure of separability (Bowers & Zhou, 2019).
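The metrics above (confusion matrix, total accuracy, precision, recall, F1 and AUC) can be computed from a model's predictions as in the following minimal sketch. scikit-learn is assumed here purely for illustration, and `y_true`/`y_prob` are hypothetical arrays of ground-truth labels and predicted probabilities, not data from this study.

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score,
                             f1_score, roc_auc_score)

# Hypothetical ground-truth labels (0 = normal, 1 = pneumonia) and
# predicted probabilities for the positive class.
y_true = np.array([0, 1, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.95])
y_pred = (y_prob >= 0.5).astype(int)  # threshold the probabilities

cm = confusion_matrix(y_true, y_pred)        # rows: true class, cols: predicted class
accuracy = accuracy_score(y_true, y_pred)    # sum of the diagonal / total samples
precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of precision and recall
auc = roc_auc_score(y_true, y_prob)          # area under the ROC curve

print(cm, accuracy, precision, recall, f1, auc)
```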
Results

Before quantization, MobileNetV1 required 70.4 MB of storage. After quantization the size decreased considerably, reaching 23.3 MB. Likewise, in MobileNetV2 the initial size before quantization was 80.1 MB; following the same procedure applied to MobileNetV1, it was reduced to 25.0 MB (see Table 1). This significant decrease in model size is crucial for the development of the proposed mobile application, as it also allows a crucial reduction in the computational cost necessary for the application to run on a mobile device. In comparison, InceptionV3, used in a previous work, presented an accuracy of 92.8%. This is a strong indication of the benefits of quantization-based compression of pre-trained neural networks applied to image classification. Results are presented in Table 4. Moreover, these preliminary results encouraged us to build an efficient Android application, with a simple and intuitive user interface, capable of classifying thoracic images as normal or pneumonia. We aim at ease of use, mobility and classification accuracy under low computational cost and energy constraints. Figures 6 and 7 show the mobile application interface used in this paper, which demonstrates the efficiency of each pre-trained network used. In the tests performed, MobileNetV1 stands out over MobileNetV2, achieving an improvement of 2.5% and 3.1% in the Normal and Pneumonia classes, respectively. Figure 8 illustrates the training history of the proposed networks. It can be seen that the test accuracy of both models during training is much higher than the training accuracy; hence, it is possible to perceive the generalization power of the models when they are tested.

Conclusion

This paper proposed a mobile application for the classification of X-ray images comprising normal and diseased (pneumonia) images. We employed two pre-trained neural networks, MobileNetV1 and MobileNetV2, with transfer learning strategies together with a quantization technique. We showed that the compression resulting from the quantization process on both MobileNetV1 and MobileNetV2 led to a substantial reduction in the amount of data to be processed and, therefore, to the possibility of efficiently running the classification process on a mobile device. The mobile application also presents a simple and intuitive user interface and is capable of classifying thoracic images as either normal or abnormal (pneumonia) with an accuracy of up to 95.4% and 94.2% for MobileNetV1 and MobileNetV2, respectively. This is an improvement over a similar method with 92.8% accuracy. As future work, we intend to carry out classification with more classes, identifying the type of pneumonia, which may be viral, bacterial, or viral caused by COVID-19.
3,666.6
2020-09-19T00:00:00.000
[ "Computer Science", "Medicine" ]
PROBLEM SOLVING IN THE CONTEXT OF COMPUTATIONAL THINKING

INTRODUCTION

In the current era of globalization in the 21st century, digital technology plays an important role in everyday life. In response to the increasing demand to compete in the global economy, countries need to prepare students with appropriate technical knowledge and communication skills (Tsai & Tsai, 2017). Combining knowledge and technology as a solution to problems will become a trend (Voskoglou & Buckley, 2012). One step in dealing with this is to include computational thinking in the curriculum (Bower, Wood, Howe, & Lister, 2017; Weintrop et al., 2016; Voogt, Fisser, Good, Mishra, & Yadav, 2015; Geary, Saults, Liu, & Hoard, 2000). However, this has not been done in Indonesia. Computational thinking is a basic ability for students in education, on the same footing as the ability to read, write and perform arithmetic calculations (Zhong, Wang, Chen, & Li, 2016; Hu, 2011). Learning with computational thinking, as a basic skill throughout the school curriculum, will enable students to learn abstract, algorithmic and logical thinking, and to be ready to solve complex and open problems. This is supported by Adler & Kim (2017), who argue that honing computational thinking is beneficial in education and for students' futures. Computational thinking is a basic ability that everyone can learn and an important preparation for the future, so young people should be educated in computational thinking. Activity-based learning strategies are strategies that help young people's cognitive growth and can guide their learning effectively through manipulation and real representations (Cho & Lee, 2017). Computational thinking is considered an important competency because students will not only work in fields affected by computing, but also need to face computing in their daily lives and in today's global economy (Bower et al., 2017; Grover & Pea, 2013).

One of the subjects in the school curriculum is mathematics, so it is possible that applying computational thinking in mathematics can improve students' conceptual understanding of mathematics. Mathematics requires learning activities that provide direct experience to foster problem-solving skills (Sung, Ahn, & Black, 2017). Computational thinking and learning mathematics have a reciprocal relationship: computing can be used to enrich mathematics and science learning, and the context of mathematics and science can be applied to enrich computational learning (Weintrop et al., 2016).
The main motivation for introducing the practice of CT (Computational Thinking) into the mathematics classroom is a response to increasingly computerized disciplines as they are practiced in the professional world (Acharya, 2016). Mathematical ability is considered a core factor that predicts students' ability to learn (Grover & Pea, 2013). Some researchers put forward convincing arguments that mathematical thinking plays an important role in CT (Gadanidis, 2017; Rambally, 2017; Son & Lee, 2016), because solving mathematical problems is a construction process (Benakli, Kostadinov, Satyanarayana, & Singh, 2017; Lockwood, DeJarnette, Asay, & Thomas, 2016; Merle, 2016). This construction process requires an analytical perspective to solve problems that are unique and fundamental for students. Based on the results of previous studies, computational thinking can improve the mastery of number sense material and arithmetic abilities (Hartnett, 2015), and it is influenced by thinking style, academic success and attitude towards mathematics (Durak & Saritepeci, 2017). In addition, computational thinking can also be influenced by grade level and the duration of ownership of mobile technology (Korucu, Gencturk, & Gundogdu, 2017). Cognitive habits that can assist in the development of computational thinking are spatial reasoning and intelligence (Ambrosio, Almeida, Macedo, & Franco, 2014; Yasar, Maliekal, Veronesi, & Little, 2017).

Problems have an important role in mathematics. Most of the learning in school is designed around mathematical problems (Reiss & Törner, 2007). To date, students' mathematical problem solving has focused mainly on the solution itself. The problem-solving steps most often reviewed are those of Polya, including the stages of identifying the problem, planning the solution, implementing the plan, and checking the answer (Reiss & Törner, 2007). In addition, computational thinking also has a role in mathematical problem solving, so it needs to be revealed how students solve mathematical problems in the context of computational thinking.

METHOD

This research is qualitative descriptive research with 30 mathematics education students at Universitas Negeri Malang as respondents. The subjects are mathematics education students who have completed the graph theory course. The instrument used is one mathematics problem consisting of a problem-solving question. The technique used to select the respondents is random sampling, because this research aims to examine the relationship between Polya problem solving and computational thinking. All students applied Polya problem solving when solving the mathematics problem. There are five stages in this study. First, giving the problem-solving question to the respondents and asking them to work on it. The question is: "A map can be easily represented by a graph. A country is symbolized by a vertex, and an edge (a line between two vertices) indicates two neighboring countries on the graph. The picture below represents a map as a graph. Specify an appropriate map for the given graph!"
(Figure 1) The second stage is observing. The researchers directly recorded the respondents and every activity they performed while solving the problem-solving question, based on the observation sheet, in order to classify the tendency toward computational thinking. Observations focused on behavioral trends in performing computational thinking during the problem-solving task. The third stage is analyzing the components of computational thinking that appear in the research respondents based on the results of direct observation. The result of the analysis is a conclusion about whether the respondent performed computational thinking or not. Fourth, data triangulation is performed to confirm the result of the analysis, i.e. the conclusion about whether the respondent performed computational thinking or not, by conducting an in-depth interview. The interview guidelines use a structured and open format. In addition to the interviews, there is also a data reduction stage for information that is not required after the in-depth interviews. Finally, the results of the analysis of the components of computational thinking of the prospective mathematics teachers are summarized based on the observations and interviews, so that data on the computational thinking of the mathematics education students in solving the problem can be obtained. The result obtained at the last stage is the classification of the computational thinking of the prospective mathematics teachers when solving the problem. The indicators of computational thinking when solving the problem are shown in Table 1.

Table 1. Components of computational thinking and the corresponding student activities:
- Abstraction: students can decide on an object to use or reject; this can be interpreted as separating important information from information that is not used.
- Generalization: the ability to formulate a solution in general form so that it can be applied to different problems; this can be interpreted as the use of variables in resolving solutions.
- Decomposition: the ability to break complex problems into simpler ones that are easier to understand and solve.
- Algorithmic thinking: the ability to design, step by step, the operations/actions by which the problem is solved.
- Debugging: the ability to identify, dispose of, and correct errors.

Results

The results show that the students can solve the problem using the components of computational thinking. The respondent needed about five minutes to read the problem. The respondent recognized that map 1 and map 2 were not correct answers. In this step the respondent performed decomposition, because the respondent split the four maps into two groups. In terms of problem solving, this corresponds to the stage of defining the problem. The respondent then drew vertices and edges according to the problem, gave a symbol to each vertex, and separated the letters connected to two countries, three countries, and so on. This process/stage can be called the abstraction stage, because the respondent was able to separate the important information that could be used. This was done by checking each remaining map (maps 3 and 4) against the graph drawing made earlier. This stage can be said to be the generalization stage, because the respondent produced a general form, which in this context is a graph on the question that has been given symbols. This process can be called planning the strategy to be used in problem solving.
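The respondent's strategy of encoding each map as a graph and comparing it with the given graph can be illustrated with a small computational sketch, shown below. This sketch is not part of the original study: the edge lists are hypothetical placeholders, since the actual maps in Figure 1 are not reproduced here; it only illustrates the abstraction (keeping neighbor relations), generalization (one representation for all maps) and algorithmic (step-by-step comparison) components.

```python
# Each graph is encoded as a set of unordered pairs of neighboring countries.
# The edge lists below are hypothetical placeholders for the maps in Figure 1.
def edges(pairs):
    return {frozenset(p) for p in pairs}

given_graph = edges([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])

candidate_maps = {
    "map 1": edges([("A", "B"), ("B", "C"), ("C", "D")]),
    "map 2": edges([("A", "B"), ("A", "C"), ("A", "D")]),
    "map 3": edges([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]),
    "map 4": edges([("A", "B"), ("A", "C"), ("B", "C"), ("B", "D")]),
}

# Algorithmic step: a map is appropriate when its neighbor relations
# are exactly the edges of the given graph.
for name, adjacency in candidate_maps.items():
    print(name, "matches" if adjacency == given_graph else "does not match")
```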
While working, the respondent realized that he had made a writing error and replaced "connected" with "neighbor". This stage can be called the debugging stage, because the respondent corrected the error. In working on map 3, the respondent drew the map and gave a symbol to each country, repeating the procedure of the initial example. After finding the answer, namely map 3, the respondent then checked map 4: the respondent drew map 4 and assigned symbols in the same way as before, in case map 4 was also correct, because the question is not a multiple-choice question. At this stage, it can be said that the respondent carried out the algorithmic stage. Viewed from the problem-solving side, this stage corresponds to implementing the plan/problem-solving strategy while checking the answers. The answer of one respondent is shown in Figure 2. Figure 2 shows the computational thinking of a student solving a mathematics problem, specifically a graph problem. The student first performed abstraction, then decomposition, debugging, generalization, and finally the algorithmic step.

Discussion

Previous studies have discussed what computational thinking might mean and what we might do about it (Hu, 2011), problem solving in the mathematics classroom in Germany (Voskoglou & Buckley, 2012; Voskoglou, 2013), implications for teacher knowledge in a K-6 computational thinking curriculum framework (Angeli et al., 2016), the possibility of improving computational thinking through activity-based learning (Cho & Lee, 2017), and a framework of curriculum design for computational thinking development in K-12 education (Kong, 2016). The level of participants' computational thinking skills differed significantly in terms of grade level, but not significantly in terms of gender (Korucu et al., 2017).

The steps taken by the respondents were, in order: decomposition, abstraction, generalization, debugging and algorithmic thinking. These steps do not match the order of the computational thinking indicators. This is in line with the results of research conducted by Voskoglou & Buckley (2012), which states that the sequence of problem-solving steps, viewed from computational thinking, does not have to follow a fixed order. When performing the decomposition and abstraction stages, the respondent understood the problem by reading the question carefully for five minutes and determining that maps 1 and 2 did not fulfill the requirements. This means that the respondent understood what the problem asked and identified the relevant parts (Reiss & Törner, 2007). In the next step, the generalization stage, the respondent produced a general form, which in this context is a graph on the question that has been given symbols. At this stage, viewed in terms of problem solving, the respondent enters the solution-planning process, because the respondent tries to construct his own formulation to complete the task; this is a strategy for solving. The respondent identifies auxiliary problems, changes the formulation, or checks the relevance of the data (Reiss & Törner, 2007).
The respondents performed debugging while the work was in progress; before carrying out the algorithmic step, the respondents had already debugged. The respondent then performed the algorithmic process, completing map 3 and map 4 according to the general form made earlier. In this process the respondent also checked map 4 even though he had already found an answer, namely map 3. In this case the respondent carried out the stage of executing the plan, and the problem solver looked back to evaluate the solution. This means checking every single part of the solution and making sure (or, preferably, proving) that it is correct, and showing that it is correct and that all arguments are valid (Reiss & Törner, 2007).

In general, computational thinking is a form of problem solving that applies not only in information technology but also in mathematics education. Students who use computational thinking in solving a mathematics problem will find it easier to solve other mathematics problems.

CONCLUSION

The relationship between problem solving and computational thinking when the respondent solves the problem is as follows: when defining the problem in the context of problem solving, the respondent performs the decomposition and abstraction stages in the context of computational thinking. During the solution-planning process, the respondent carries out the generalization stage. When carrying out the plan and looking back to evaluate the solution, the respondent performs the debugging and algorithmic steps.

Computational thinking supports students in solving mathematics problems. The development of computational thinking needs further research, which will affect learning, especially mathematics learning; for example, the assessment of computational thinking, the characteristics of computational thinking, the elaboration of each component of computational thinking, and others.
3,012.8
2019-09-30T00:00:00.000
[ "Computer Science", "Mathematics", "Education" ]
Novel IMB16-4 Compound Loaded into Silica Nanoparticles Exhibits Enhanced Oral Bioavailability and Increased Anti-Liver Fibrosis In Vitro Background: Liver fibrosis, as a common and refractory disease, is challenging to treat due to the lack of effective agents worldwide. Recently, we have developed a novel compound, N-(3,4,5-trichlorophenyl)-2(3-nitrobenzenesulfonamide) benzamide (IMB16-4), which is expected to have good potential effects against liver fibrosis. However, IMB16-4 is water-insoluble and has very low bioavailability. Methods: Mesoporous silica nanoparticles (MSNs) were selected as drug carriers for the purpose of increasing the dissolution of IMB16-4, as well as improving its oral bioavailability and inhibiting liver fibrosis. The physical states of IMB16-4 and IMB16-4-MSNs were investigated using nitrogen adsorption, thermogravimetric analysis (TGA), HPLC, UV-Vis, X-ray diffraction (XRD) and differential scanning calorimetry (DSC). Results: The results show that MSNs enhanced the dissolution rate of IMB16-4 significantly. IMB16-4-MSNs reduced cytotoxicity at high concentrations of IMB16-4 on human hepatic stellate cells LX-2 cells and improved oral bioavailability up to 530% compared with raw IMB16-4 on Sprague–Dawley (SD) rats. In addition, IMB16-4-MSNs repressed hepatic fibrogenesis by decreasing the expression of hepatic fibrogenic markers, including α-smooth muscle actin (α-SMA), transforming growth factor-beta (TGF-β1) and matrix metalloproteinase-2 (MMP2) in LX-2 cells. Conclusions: These results provided powerful information on the use of IMB16-4-MSNs for the treatment of liver fibrosis in the future. Mesoporous silica nanoparticles (MSNs) have attracted great attention because of their special features such as their unique porous structure, large surface area, pore volume and strong absorbability [21][22][23][24][25]. All these features allow better control of drug loading and release. Poorly water-soluble drug molecules are loaded into small mesopores and exist in noncrystalline form by decreasing the Gibbs free energy of the system [26]. This noncrystalline form can be rapidly released from mesopores and generate supersaturated solutions on a silica surface [27]. This ability of silica nanoparticles to increase the drug dissolution rate plays an important role in enhancing oral bioavailability [28]. In addition, numerous factors including particle size, pore diameter, pore length and modification of the surface could affect drug release from MSMs, leading to drug release occurring within a few minutes or within days [29]. Furthermore, MSNs can protect drugs from hydrolysis, oxidation, or degradation processes due to the physicochemical stability of the silica matrix [30]. In this paper, we report for the first time that MSNs were used as the carrier for IMB16-4 to increase the dissolution rate, improve bioavailability and inhibit liver fibrosis effects. MSNs with relatively large pore diameter and short pore channel length were applied to improve the dissolution rate and achieve controlled release. The effects of MSNs on the uptake and release of IMB16-4 were systematically studied using SEM, TEM, N 2 adsorption, XRD, differential scanning calorimetry (DSC), TGA and HPLC. In vivo pharmacokinetic studies were conducted to confirm the enhancement of oral bioavailability. In addition, the anti-fibrotic effects of IMB16-4 loaded into MSNs was also explored in vitro on the human HSC line LX-2 cells by testing TGF-β1, α-SMA and MMP2 protein activity. 
Morphology of MSNs and IMB16-4-MSNs

The morphology and particle size of MSNs and IMB16-4-MSNs were analyzed by SEM and TEM. As shown in Figure 1A,B, MSNs had a nearly monodispersed spherical shape with a size of about 60 nm. The mean pore size of the MSNs was approximately 8 nm, with pores both on the particle surface and within the particle. As seen in Figure 1C,D, the pores of the MSNs were partly blocked by IMB16-4 (Figure 1C). The pore channels were severely blocked when the mass ratio of IMB16-4 to MSNs was 1:1 (Figure 1D).

Estimation of the Brunauer-Emmett-Teller (BET) Specific Surface Area from Nitrogen Adsorption Studies

The nitrogen adsorption/desorption isotherms of the samples are presented in Figure 2. The values for the BET specific surface area (S_BET), the total pore volume (Vt) and the Barrett-Joyner-Halenda (BJH) pore diameter (w_BJH) are given in Table 1. MSNs possess high S_BET, high Vt and a large pore diameter, which indicate their ability to store small agents [21]. After being loaded with IMB16-4, S_BET, Vt and w_BJH were drastically reduced. This was because IMB16-4 was loaded into the pores.

Quantification of IMB16-4 Uptake by TGA and HPLC Analysis

In order to achieve the maximum drug loading, MSNs were soaked in DMSO solution during drug loading [31]. Then, DMSO was removed by vacuum drying at 80 °C and 178 °C, successively. Afterwards, the amount of drug loading was quantified by TGA and HPLC, respectively. In the TGA measurement, drug loading was obtained from the temperature-dependent weight reduction. MSNs showed good thermostability, so the weight reduction was mainly attributed to IMB16-4. The weight losses of raw IMB16-4, IMB16-4-MSNs (1:1) and IMB16-4-MSNs (1:2) were 62.7%, 32.8% and 25.8%, respectively (Figure 3A). The drug loading was 41.1% and 52.3% for IMB16-4-MSNs (1:2) and IMB16-4-MSNs (1:1), respectively, calculated as the ratio of the weight loss of IMB16-4-MSNs to the weight loss of raw IMB16-4. It is noted that the TGA measurement is not sufficiently accurate to quantify the total drug content of the sample. Nevertheless, together with HPLC, TGA is an important method for detecting drug loading. As shown in Figure 3B, a strong absorption peak at 258 nm was observed, which was attributed to IMB16-4. MSNs did not affect absorption at 258 nm. Therefore, 258 nm was selected as the detection wavelength for drug loading. The drug uptake values obtained by HPLC are shown in Table 1.

Solid State Characterization Using DSC and XRD Studies

The crystalline form can be estimated by DSC analysis when melting point depression appears. If the compound is present in a noncrystalline state, no melting point depression can be detected [32]. As shown in Figure 4, the DSC curve of IMB16-4 exhibited a single endothermic peak at 253-258 °C, which corresponded to its intrinsic melting point. The melting point depression of the physical mixture also appeared. However, no melting peak was observed for the IMB16-4-MSNs (mass ratio of IMB16-4 to MSNs of 1:2), indicating the absence of a crystalline state. These assumptions could be further confirmed by the results of the XRD study. The XRD patterns of the IMB16-4-MSN samples were recorded to determine whether a crystalline IMB16-4 phase could be detected. As shown in Figure 5, the diffraction pattern of raw IMB16-4 was highly crystalline, as indicated by the numerous peaks. For the physical mixture, the peaks were attributed to pure IMB16-4.
However, the diffraction profile of IMB16-4-MSNs (the mass ratio of IMB16-4 and MSNs was 1:2) showed no peaks. It is known that the absence of distinctive peaks indicates that the IMB16-4 loaded into the pores of MSNs exists in a noncrystalline state. In contrast, slight peaks of IMB16-4-MSNs (the mass ratio of IMB16-4 and MSNs was 1:1) were observed, indicating that MSNs reached the limit of suppressing crystallization. Effects of MSNs on IMB16-4 Release Behavior As shown in Figure 6, the observed dissolution rate of the raw IMB16-4 was quite low and the amount of dissolved IMB16-4 in the release medium was about 40% at 12 h. However, the amounts of dissolved IMB16-4 from IMB16-4-MSNs (1:2) in pH6.8 phosphate buffer solutions at sampling times of 1 h, 6 h and 12 h accumulated to 32.4%, 56.3% and 66.8%, respectively. The corresponding amounts were 16.9%, 35.1% and 44.7% for IMB16-4-MSNs (1:1). Remarkably, the dissolution rate of IMB16-4 released from IMB16-4-MSNs was faster compared with that of raw IMB16-4, especially as the mass ratio of IMB16-4 and MSNs was 1:2. This dissolution improvement may be mainly attributed to the large pores of MSNs maintaining nanoscale IMB16-4 and transforming the crystalline state of IMB16-4 to a noncrystalline state, which is known to improve the drug dissolution rate. It can also be seen that the release profiles of all IMB16-4-MSN samples were of the same unique type with a sustained release of IMB16-4 both in pH6.8 and pH 1.0 release medium. The initial bursts of IMB16-4 release were attributed to the presence of IMB16-4 in the external pores and near the holes of the MSNs, which allows a certain amount of IMB16-4 to be released quickly into the release medium and satisfies the need for immediate treatment after administration. The release rate then became slower, due to the slow dissolution of IMB16-4 from the pores inside the particles. The diffusion of solvent into the small mesopores and the counter diffusion of IMB16-4 out of the mesopore channels delayed the release of IMB16-4. In Vitro Antifibrotic Effects LX-2 cells were stimulated with TGFβ1 protein (2 ng/mL) and then treated with raw IMB16-4 and IMB16-4-MSNs, respectively. In vitro antifibrotic effects were evaluated by the expression of hepatic fibrogenic markers, including α-SMA, TGF-β1 and MMP2 using Western blot. After stimulation with TGFβ1, the expression of hepatic fibrogenic markers was increased. Along with the IMB16-4 treatment, the regulated expression of all hepatic fibrogenic markers is shown in Figure 8. IMB16-4-MSNs significantly decreased the protein levels of α-SMA, TGF-β1 and MMP2 on LX-2 cells. The antifibrotic effects of IMB16-4-MSNs were stronger than those of raw IMB16-4, indicating that IMB16-4-MSNs inhibited liver fibrosis effects. After being dissolved by DMSO, IMB16-4 showed potential anti-liver fibrosis effects. However, the differences are not significant. It can be assumed that IMB16-4 in DMSO was separated out in a serum-free culture. IMB16-4, in sterile water, possessed a poor inhibition effect, attributed to poor solubility (43.2 ± 9.1 ng/mL) and far from the concentration required to show anti-liver fibrosis effects. Overall, IMB16-4-MSNs increased inhibition of liver fibrosis effects, owing to the MSNs reducing the crystallite size, increasing dispersibility and enhancing solubility. MSNs Increase IMB16-4 Absorption In Vivo The in vivo bioavailability of IMB16-4 from IMB16-4-MSNs (1:2) was assessed with raw IMB16-4 as a control. 
As shown in Figure 9 and Table 2, the Cmax and AUC 0≈12 h values of IMB16-4-MSNs (1:2) after intragastric administration were increased nearly 5fold compared with raw IMB16-4. The plasma concentrations of IMB16-4 were greatly promoted by the MSNs. The Cmax values for IMB16-4-MSNs and raw IMB16-4 were 0.89 ± 0.22 and 0.18 ± 0.04 mg/L, respectively. The AUC 0≈12 h value for IMB16-4-MSNs was increased about 5.3-fold compared with that of raw IMB16-4. Therefore, clearly, IMB16-4-MSNs significantly improved the in vivo adsorption of IMB16-4. The mean Tmax values for IMB16-4-MSNs and raw IMB16-4 were 4.14 h and 3.43, respectively. The large pore diameter, short pore channel, protective effect and nondegradable nature of MSNs may make the IMB16-4 release behavior as effective in vivo as that in vitro. In addition, several factors attributed to the improvement of oral bioavailability, including greater dissolution rate of IMB16-4 owing to its noncrystalline state, reduced crystallite size and increased dispersibility. Sprague-Dawley (SD) rats were supplied by HFK Biotechnology Co. Ltd. (Beijing, China). All animal experiments were approved by the local IACUC (Institutional Animal Care and Use Committee, Beijing, China) and performed in accordance with the Institutional Review Board for Laboratory Animal Care (IMB-2020121406D6). Preparation of MSNs MSNs were synthesized as reported [33,34]. In total, 600 mg of CTAB was dissolved in 192 mL of deionized water at 60 • C under vigorous stirring in a three-necked flask reactor for 30 min. Then, 152 mL of octane, 132 mg of L-lysine, 289 mg of AIBA, 6.4 mL of TEOS and 21 mL of styrene monomer were added to the system in sequence. The reaction was kept for 4 h under nitrogen at 60 • C with constant stirring. Afterwards, the resulting product was cooled to room temperature over one night and purified by centrifugation at a rate of 15,000 rpm. Then the precipitate was washed with ethanol. After centrifugation, the precipitate was heated at 600 • C for 3 h under atmospheric conditions to remove organic template. In total, 10 mL DMSO solution of IMB16-4 (30 mg/mL) was dropped to 300 and 600 mg MSNs, separately. The corresponding mass ratio of IMB16-4 and MSNs was 1:1 and 1:2. The mixture was gently stirred and then DMSO was evaporated by a rotary evaporator at 80 • C. The residual DMSO was removed at 178 • C under the vacuum. The resulting samples were referred to as IMB16-4-MSNs. Equilibrium Concentration Study An excess amount of raw IMB16-4 was added into 5 mL of distilled water at 25 ± 2 • C. The raw IMB16-4 solution at equilibrium time (about 24 h) was withdrawn and filtered using a 0.22 µm membrane. The subsequent filtrate was diluted with internal standard (4 -Chloroacetanilide) acetonitrile solution and analyzed by HPLC-MS/MS analysis (Thermo LTQ XL, USA). The standard curve for IMB16-4 was the linear (R 2 > 0.992) over the concentration range of 0.3~1200.0 ng/mL. The quantitative ion pair of IMB16-4 and internal standard quantitative were m/z = 499.9/196.0 and m/z = 168.04/126.1, respectively. Sample Characterization The porous structure, morphology and particle size of MSNs and IMB16-4-MSNs were evaluated using a TEM (JEM1200EX, JEOL, Japan) and an SEM (SU8020, HITACHI, Japan). A very low concentration of samples was dispersed in water under ultrasonication, then dropped onto gold-plated and carbon-coated copper grids, respectively. 
The pore characteristics of the samples were studied by determining the nitrogen adsorption using a surface area and pore size analyzer (ASAP 2460, micromeritics, USA). The samples were outgassed at 150 • C for 6 h prior to analysis. The pore characteristics were determined according to the BET and BJH procedures from the desorption branches of the isotherms. The physical state of IMB16-4 was evaluated using an X-ray diffractometer (Brucker D8 Advance, Germany). Data were obtained from 5 • to 40 • (diffraction angle 2θ) at a step size of 0.02 • and a scanning speed of 4 • /min radiation. DSC analysis of the samples was examined by differential scanning calorimetry (DSC 1, Mettler, Switzerland). The samples were heated over a temperature range between 50 and 300 • C at a rate of 10 • C/min under a nitrogen purge of 40 mL/min. The drug loading was examined by TGA (TGA/DSC 1, Mettler, and Switzerland). The samples were heated over a temperature range between 40 and 900 • C at a rate of 10 • C/min under a nitrogen purge of 50 mL/min. HPLC Analysis The samples were analyzed by HPLC (Nexera-i LC-2040C 3D, Shimadzu, Japan). Analysis was carried out on a Shim-pack GIST C18 column (50 × 2.1 mm, 2 µm, Shimadzu). The mobile phase consisted of an 80:20 (% v/v) mixture of methanol and pH 2.0 phosphoric acid solutions, the flow rate was 0.3 mL/min and the detection wavelength was 258 nm. The column temperature was 25 • C. The injection volume was 5 µL. The retention times are about 3.07 min for IMB16-4. The samples were filtered using a 0.22 µm membrane filter before running the HPLC analysis. Drug loading (%) = (weight of IMB16-4 in samples/weight of samples) × 100. In Vitro Dissolution Dissolution studies were conducted using a USP II paddle method (100 rpm, 37 • C, and 900 mL dissolution medium) with a dissolution tester (ZRS-8LD, China). The release of IMB16-4 and IMB16-4-MSNs was performed in pH 1.0 hydrochloric acid solution and pH 6.8 phosphate buffer solutions, containing 3% (w/v) SDS, respectively. All release studies were carried out in triplicate. At predetermined time intervals, 5 mL of sample solution was withdrawn from the release medium and filtered using a 0.22 µm membrane filter before running the HPLC analysis. An equivalent amount of fresh medium was added to maintain a constant dissolution volume. In Vitro Cytotoxicity The human HSC line LX-2 was obtained from Pro He [35]. LX-2 cells were cultured in dulbecco's modified eagle medium (DMEM)/GlutaMAX I (Invitrogen, USA) with 10% fetal bovine serum and 1% penicillin/streptomycin at 37 • C in an atmosphere of 5% CO 2 . The cell suspension was seeded into 96-well plates at 100 µL per well and incubated for 24 h. Then, MSNs, IMB16-4 and IMB16-4-MSNs suspensions containing different concentrations were added to 96-well plates at 100 µL per well and incubated for 24 h. Then, 10 µL CCK8 solution was added to 96-well plates and incubated for 2 h. Finally, the absorbance was determined at 450 nm by an optical microscope (BioTek, SYNERGYH1, USA). The cell survival rate was calculated according to formula: Cell survival rate (%) = Absorbance of sample/Absorbance of control×100. The Antifibrotic Effects on the Human HSC Line LX-2 Cells After ultraviolet sterilization, raw IMB16-4, IMB16-4-MSNs and MSNs were suspended in water, respectively. Another group of raw IMB16-4 was suspended in DMSO. The concentration of IMB16-4 was 2 mM. Then, 2µL of solution containing IMB16-4 was added into 2 mL of serum-free culture with TGFβ1. 
LX-2 cells were cultured as described above. LX-2 cells were seeded in a 6-well plate, cultured in DMEM/GlutaMAX I, containing 10% fetal bovine serum (FBS) in 5% CO 2 atmosphere at 37 • C. Serum-free culture was replaced until the cells reached 90 ≈ 95% confluence. After 24 h, cells were treated with TGFβ1 (2 ng/mL) and IMB16-4 (2 µM) for 24 h. Then, cells were washed with phosphate buffer saline (PBS) and protein was extracted in radio-immunoprecipitation assay (RIPA) buffer. Then mixture was centrifuged at 12,000 rpm at 4 • C for 20 min. The supernatant was collected, and the total protein was determined using the bicinchoninic acid (BCA) protein assay kit (Beyotime Biotechnology, China). Samples were applied to the 10% SDS-PAGE gel. Then, the protein bands were transferred to a polyvinylidene difluoride (PVDF) membrane (Millipore Corp, Atlanta, GA, USA) and the membrane was blocked for 1 h with 5% nonfat milk. The membrane was then incubated with desired primary antibodies overnight at 4 • C followed by horseradish peroxidase (HRP) conjugated secondary antibodies (1:10,000) at room temperature for 1 h. The protein bands were analyzed on an imager (Tanon5200, China). Pharmacokinetics Study The mice were maintained under a specific pathogen-free (SPF) environment with a 12 h light/dark cycle. SD rats (body weight 200 ± 20 g) were fasted overnight and divided into two groups. Raw IMB16-4 and IMB16-4-MSNs were given orally by gavages at a dose of 100 mg/kg, respectively. Before administering, both IMB16-4-MSNs and raw IMB16-4 were dispersed in 0.5% CMC-Na aqueous solution, respectively. Blood samples were collected from the eye socket vein at time points of 0.17, 0.5, 1, 2, 3, 4, 6, 8 and 12 h after dosing. Plasma samples were collected by centrifuging at 3000 rpm for 10 min and were then stored at −20 • C until analysis. Plasma samples were mixed with internal standard (4 -Chloroacetanilide) acetonitrile solution. Then, the mixture was vortex-mixed for 10 s. After centrifugation at 15,000 rpm for 10 min, the supernatant was used for HPLC-MS/MS analysis (AB SCIEX 6500 Qtrap, USA). The condition of the capillary voltage was −4500 V. The temperature was 500 • C. The mobile phase comprised acetonitrile and 10 mM of ammonium acetate aqueous solution. Analysis was carried out on an Xselect HSS T3 column (2.1 × 100 mm, 2.5 µm, Waters) in gradient elution. The column temperature was 40 • C and the flow rate was 0.3 mL/min. The standard curve for IMB16-4 was the linear (R 2 > 0.992) over the concentration range of 4~2048 ng/mL. The quantitative ion pairs of IMB16-4 and internal standard quantitative were m/z = 500.0/195.7 and m/z = 168.0/126.1, respectively. The pharmacokinetic parameters were obtained using statistic software DAS2.0. Statistical Analysis Each experiment was performed at least in triplicate. Statistical analysis was performed by one-way or two-way analysis of variance (ANOVA) followed by Dunnett's multiple comparison tests using SPSS 13.0 and Origin 9.1 software. Statistical significance was accepted at the level of p < 0.05. Conclusions IMB16-4, as a novel compound with potential anti-liver fibrosis effects, was loaded into MSNs to enhance oral bioavailability and improve therapeutic efficacy by changing the crystalline state of IMB16-4, regulating release from silica nanocavities. The advantages of MSNs with large pore diameter and short pore channel were major contributing factors for release. 
The dissolution of IMB16-4-MSNs in the release medium showed great advantages compared with that of raw IMB16-4, which accounted for the enhanced oral bioavailability and inhibited liver fibrosis effect. Overall, these results offer powerful information presenting IMB16-4-MSN as an ideal anti-liver fibrosis preparation.
4,715.2
2021-03-01T00:00:00.000
[ "Medicine", "Materials Science", "Chemistry" ]
Prospects of searching for composite resonances at the LHC and beyond

Composite Higgs models predict the existence of resonances. We study in detail the collider phenomenology of both the vector and fermionic resonances, including the possibility of both of them being light and within the reach of the LHC. We present current constraints from di-boson, di-lepton resonance searches and top partner pair searches on a set of simplified benchmark models based on the minimal coset SO(5)/SO(4), and make projections for the reach of the HL-LHC. We find that the cascade decay channels for the vector resonances into top partners, or vice versa, can play an important role in the phenomenology of the models. We present a conservative estimate for their reach by using the same-sign di-lepton final states. As a simple extrapolation of our work, we also present the projected reach at the 27 TeV HE-LHC and a 100 TeV pp collider.

1 Introduction

A promising way of addressing the naturalness problem is to consider the existence of strong dynamics around the several-to-10 TeV scale. The Higgs boson is a pseudo-Nambu-Goldstone boson, much like the pions in QCD. This so-called composite Higgs scenario [1-3] has become a main target for the search of new physics at the Large Hadron Collider (LHC). A generic prediction of the composite Higgs scenario is the presence of composite resonances. Frequently considered resonances are either spin-1, analogous to the ρ-meson in QCD, or spin-1/2 resonances with quantum numbers similar to those of the top quark, called "top partners". In this paper, we study in detail the collider phenomenology of both kinds of resonances. We focus on the minimal coset SO(5)/SO(4), denoted as the Minimal Composite Higgs Model (MCHM) [4,5]. We include several benchmark choices of both the spin-1 resonance and the top partner: ρ_L (3,1), ρ_R (1,3), ρ_X (1,1), Ψ_4 (2,2) and Ψ_1 (1,1). We derive the current constraints, and make projections for the reach of the HL-LHC. We also make a simple extrapolation to estimate the prospects at the 27 TeV HE-LHC [6] and the 100 TeV pp collider [7-10]. Search channels in which the composite resonances are produced via the Drell-Yan process and then decay into Standard Model (SM) final states, such as di-lepton, di-jet, tt̄ and di-boson, are well known. We update the limits by including the newest results at the 13 TeV LHC, such as the boosted di-boson jet resonance searches performed by ATLAS with integrated luminosity L = 79.8 fb⁻¹ [11], the di-lepton resonance search at CMS with integrated luminosity L = 77.3 fb⁻¹ for the electron channel and L = 36.3 fb⁻¹ for the muon channel [12], and the search for pair production of top quark partners with charge 5/3 at CMS with integrated luminosity L = 35.9 fb⁻¹ [13]. In addition, we pay close attention to scenarios in which the spin-1 resonances and top partners can be comparable in mass. In this case, cascade decays, in which one composite resonance decays into another, can play an important role [14-19]. In particular, the channels ρ⁺_L → t B̄ / X_{5/3} t̄, or ρ⁺_L → X_{5/3} X̄_{2/3} and ρ⁰_L → X_{5/3} X̄_{5/3}, can have significant branching ratios in models with a quartet top partner, if ρ_L is in the intermediate mass region M_Ψ < M_ρ < 2M_Ψ or the high mass region M_ρ > 2M_Ψ, respectively. Such cascade decays can lead to same-sign di-lepton (SSDL) signals.
Since these are relatively clean signals, which have already been used for LHC searches, we use them in our recast and estimate the prospective reach in the M_ρ − M_Ψ plane. They are comparable in some regions of the parameter space to the di-boson searches for the spin-1 resonances and the pair-produced top partner searches at the LHC. For the models with a singlet top partner, the cascade decay channel T → tρ_X → ttt̄ in the single production channel can play an important role in the mass region M_T > M_ρX. The reach at the LHC is also estimated in the SSDL channels. The projections based on the SSDL channel alone are of course conservative. Other decay modes of the cascade decay channels mentioned above can further enhance the reach, such as those with more complicated final states like the ℓ + jets channels. We leave a detailed exploration of such additional channels for future work.

The paper is organized as follows. In section 2, we summarize the main phenomenological features of the models, including the couplings of the particles in the mass eigenstates, and the production and the decay of the resonances. The details of the models are presented in appendix A and appendix B. In section 3, we show the present bounds from the LHC searches and extrapolate the results to the HL-LHC with an integrated luminosity of L = 3 ab⁻¹. An estimate of the reach at the 27 TeV HE-LHC and a 100 TeV pp collider is also included. We conclude in section 4.

Table 1. Upper table: the particle content considered in this paper and their representations under the unbroken SO(4) ≃ SU(2)_L × SU(2)_R: ρ_L (3,1), ρ_R (1,3), ρ_X (1,1), Ψ_4 (2,2), Ψ_1 (1,1), q_L (2,2), t_R^(P) (1,1), t_R^(F) (1,1). The SM left-handed quarks q_L = (t_L, b_L)^T are embedded into an incomplete representation, 5, of SO(5). We consider two possible origins of the right-handed top quark. It can be partially composite, denoted as t_R^(P), and embedded in an incomplete representation, 5, of SO(5); it can also be a fully composite resonance, denoted as t_R^(F), assumed to be an SO(4) singlet massless bound state. Their representations under the unbroken SO(4) are also presented in the table. Lower table: the models with different combinations of the composite spin-1 resonances ρ and the fermionic resonances Ψ considered in our paper: model LP(F)_4 contains ρ_L and Ψ_4, model RP(F)_4 contains ρ_R and Ψ_4, model XP(F)_4 contains ρ_X and Ψ_4, and model XP(F)_1 contains ρ_X and Ψ_1. P (F) denotes the partially (fully) composite right-handed top quark.

2 Phenomenology of the models

We begin with a brief review of the composite Higgs models under consideration. We will describe the particle content, and give a qualitative discussion of the sizes of various couplings. The details of the models are presented in appendices A and B. We consider models similar to those presented in ref. [14]. The strong dynamics is assumed to have a global symmetry SO(5), which is broken spontaneously to SO(4) ≃ SU(2)_L × SU(2)_R. The resulting Goldstone bosons, parameterizing the coset SO(5)/SO(4), contain the Higgs doublet. This is the minimal setup with a custodial SU(2) symmetry. The composite resonances furnish complete representations of SO(4). We summarize the particle content and the models considered in our paper in table 1.
For the spin-1 resonances ρ, we consider three representations under the unbroken SO(4) SU(2) L × SU(2) R : ρ L (3, 1), ρ R (1, 3), ρ X (1, 1), while for the fermionic resonances Ψ, we study the quartet Ψ 4 (2, 2) and the singlet Ψ 1 (1, 1). The left handed SM fermions, q L = (t L , b L ) T , are assumed to be embedded into (incomplete) 5 representations of SO(5) (see eq. (A.21)) [5]. There are two well-studied ways of dealing with the right handed top quark. First, it can be treated as an elementary field, and embedded into a 5 representation of SO(5) (see eq. (A.22)) [5]. We call this the partially composite right-handed top quark scenario, and denote right-handed top as t (P) R . It is also possible that it is a massless bound state of the strong sector and a SO(4) singlet, denoted as t (F) R [20]. We call this the -2 - JHEP01(2019)157 fully composite right-handed top quark scenario. We will consider both of these cases. In principle, many of the composite resonances can be comparable in their masses in a given model. Rather than getting in the numerous combinations, we consider a set of simplified models in which only one kind of spin-1 resonance(s) and one kind of top partner(s) are light and relevant for collider searches. For example, model LP 4 involves the strong interactions between the ρ L and the quartet top partner Ψ 4 and the partially composite right-handed top quark. In comparison, model LF 4 is different only in the treatment of the right handed top quark which is assumed to be fully composite. In the following, we will first discuss all the most relevant interactions and their coupling strengths in section 2.1. The production and decay of the resonances at the LHC are presented in section 2.2. The mass matrices of different models and their diagonalizations are discussed in appendix C, where we also list the expressions all the mass eigenvalues. The couplings Scale f , similar to the pion decay constant in QCD, parameterizes the size of global symmetry breaking. The parameter ξ = v 2 /f 2 measures the hierarchy between the weak scale and the global symmetry breaking scale in the strong sector. It has been well constrained from LEP electroweak precision test (EWPT) and the LHC Higgs coupling measurements to be ξ 0.13 [21,22]. In the expressions for the couplings, we will keep only terms to the leading order in ξ. The interactions of the spin-1 resonances in the strong sector are characterized by several couplings, (g ρ L , g ρ R , g ρ X ), sometimes collectively denoted as g ρ . Typically, they are assumed to be much larger than the SM gauge couplings, i.e. g ρ g , g. We will keep only terms to the leading order in g/g ρ in the expressions of the couplings. 1 Similar to ref. [23], we will also introduce an O(1) parameter for each representation of the spin-1 resonances, defined as a ρ L,R,X = m ρ L,R,X g ρ L,R,X f . (2.1) In most of the cases, we will fix a ρ . The sector of fermionic composite resonances involve another strong coupling, g Ψ , defined as: For partially composite SM fermions, there are mixings between the SM fermions and the top partners before electroweak symmetry breaking (EWSB). For example, the mixing angles between the elementary left (right) handed top and the quartet (singlet) top partners (defined in eq. (B.11) and eq. (B.38)) in models within the partially composite right-handed top quark scenario are: Between heavy resonances and SM fermions: Between SM particles: Table 2. 
The LO coupling strengths between the charged spin-1 bosons ρ ± L,R , W ± and the fermions in models LP(F) 4 , RP(F) 4 . Note that f el denotes all the SM light fermions, including the first two generation quarks, b R and all the leptons. Here (P) and (F) mean the partially and fully composite right-handed top quark scenario, respectively. and the same definition applies to c θ L , c θ R , t θ L , t θ R . The interactions of the spin-1 resonances and the fermions are summarized in table 2 (for the charged sector) and table 3, table 4 (for the neutral sector). The couplings can be organized into four classes by their typical sizes. The first class includes the interactions generated directly from the strong dynamics and preserve the non-linearly realized SO(5) symmetry. They only involve the strong sector resonances ρ, Ψ, the pseudo-Goldstone bosons h and the fully composite right-handed top quark t (F) R . The interaction strengths are of O(g ρ ) or O(g Ψ ). Since these interactions preserve the unbroken SO(4) symmetry, the interactions between ρ and Ψ are determined by the quantum number of the fermionic resonances under the SU(2) L × SU(2) R . The symmetry selection rules permit the following interactions of O(g ρ ): Between heavy resonances: Between heavy resonances and SM fermions: Between SM particles: For model LP(F) 4 and RP(F) 4 , it reads g 4cW −T 3L − 1 2 s 2 θL ξ. Table 3. The LO coupling strengths between the neutral spin-1 bosons, ρ 0 L,R,X and Z, and the fermions in models LP(F) 4 , RP(F) 4 , and XP(F) 4 . f el denotes all the SM elementary fermions including the first two generation quarks, b R and all the leptons. Here (P) and (F) in the couplings refer to the partially and fully composite right-handed top quark scenario, respectively. c W denotes cos θ W with θ W being the weak mixing angle. Table 4. The LO coupling strengths between the neutral spin-1 gauge bosons, ρ 0 X and Z, and the fermions in models XP(F) 1 . JHEP01(2019)157 where T = T , B, X 5/3 , X 2/3 denotes the fermionic resonances in the quartet. The last term is for the case of a fully composite right-handed top quark. As will be discussed in the next subsection, these interactions dominate the decay of ρ resonances if the channels are kinematically open. For the interactions involving the ρ and the Higgs doublet H, we have (see appendix B for detail): where we have defined the SU(2) R current The Higgs doublet can be parameterized as with φ ± , χ eaten by the SM W ± , Z bosons after EWSB. By the Goldstone equivalence theorem, the interactions involve φ ± , χ will determine the couplings of longitudinal modes of W ± and Z gauge bosons at high energy, leading to the following interactions with O(g ρ ): where we have integrated by parts before turning on the Higgs vacuum expectation value (VEV) and focused only on the trilinear couplings (see ref. [20] for detail). M Q X , M Q are defined in eq. (B.12). In the limit M 4 /f y L , y 2L , these are the dominant interactions between the top partners and the SM fields. By using Goldstone equivalence theorem, we can easily derive the well-known approximate decay branching ratios for the top parnters: Taking into account the mixing effects, shown in eq. (2.3), will not modify the conclusions significantly. JHEP01(2019)157 The second class of interactions are suppressed either by the left-handed top quark mixing s θ L or the right-handed top quark mixing s θ R defined in eq. (2.3). These are the couplings of ρ to one top partner and one SM quark. 
These interactions preserve SM SU(2) L × U(1) Y gauge symmetries. Symmetry considerations select the following interactions: where the last term is only present for the partially composite right-handed top quark scenario. The interactions will play an important role in the kinematical region M Ψ + M t,b < M ρ < 2M Ψ , if the mixings s θ L,R are not too small. The third class of interactions contains the SM gauge interactions with couplings g, g or SM Yukawa couplings. These include the W and Z interactions with SM elementary fermions (quarks and the leptons) and the fermionic resonances; and the mixed couplings, proportional to y L,R , between the top partners Ψ 4,1 and the elementary SM quarks q L , t which leads to the same decay branching ratios as eq. (2.10) for the quartet. For the singlet top partner, this gives The fourth class contains the interactions with coupling strengths suppressed by g/g ρ , g /g ρ . These are the universal couplings between the ρ and the SM fermions, due to the mixings of ρ and SM gauge bosons which are present before the EWSB. These interactions include ρ + L,Rf el f el , ρ 0 L,R,Xfel f el , (2.14) where f el denotes all the SM elementary fermions including the first two generation quarks, b R , and all of the leptons. For the ρ L , the couplings are of O(g 2 /g ρ L ), while for ρ 0 R,X , they are of O(g 2 /g ρ R,X ). For the couplings between ρ and the third generation quarks, there are additional contributions of O(g ρ s 2 θ L,R ): with the final term only arises for the partially composite right-handed top quark. All the remaining coupling vertices can only be present after EWSB. Therefore, they are suppressed further by ξ and irrelevant for the phenomenology of the composite resonances. The production and the decay of the resonances at the LHC In this subsection, we discuss the production and decay of the composite resonances. The cross sections are calculated by first implementing the benchmark models into an UFO model file through the FeynRules [24] package and then using MadGraph5 aMC@NLO [25] to simulate the processes. Most of the calculations are carried out at the LO. The only exception is the QCD pair production of top partners, for which we use the Top++2.0 package [26][27][28][29][30][31] to obtain the next-to-next-to-leading-order (NNLO) cross sections. See appendix D for the cross sections at different proton-proton center-of-mass energies. For the decay widths, we have used the analytical formulae calculated by the FeynRules. Production at the LHC We start from the production of the vector resonances at the LHC. The vector resonances ρ will be dominantly produced via the Drell-Yan processes inspite of their suppressed couplings ∼ g 2 SM /g ρ to the valence quarks [14,32]. Although the ρ resonances are strongly interacting with the longitudinal SM gauge bosons, as shown in eq. (2.8), the electroweak Vector-Boson-Fusion (VBF) production can barely play an useful role in the phenomenology of the ρ at the LHC [14,32]. For example, for g ρ L = 3 and M ρ L = 3 TeV, the W + W − → ρ 0 L fusion cross section is two orders of magnitude smaller than that of the Drell Yan process. In figure 1, we have shown the M ρ dependence of the Drell-Yan production cross section for the charged resonances ρ ± L,R and neutral resonances ρ 0 L,R,X , fixing a 2 ρ = 1/2. For the production of the charged resonances, we have summed over the ρ + and ρ − contributions. 
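As an aside for later reference, the asymptotic decay pattern referred to above (eq. (2.10) for the quartet and the analogous relation for the singlet) is not written out explicitly in the text; it is the familiar Goldstone-equivalence result for heavy partners and small elementary-composite mixing:
  Br(X 5/3 → W + t) ≈ Br(B → W − t) ≈ 1,
  Br(X 2/3 → Z t) ≈ Br(X 2/3 → h t) ≈ 1/2,  Br(T → Z t) ≈ Br(T → h t) ≈ 1/2  (quartet),
  Br(T → W + b) ≈ 1/2,  Br(T → Z t) ≈ Br(T → h t) ≈ 1/4  (singlet),
with the relative factor of two in the singlet case reflecting the two charged would-be Goldstone degrees of freedom relative to each neutral one, as noted in appendix B.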
The cross sections are decreasing functions of the strong coupling g ρ , as expected from the coupling scaling in tables 2 and 3. The only exception is the production rare of the charged ρ ± R , whose couplings to the valence quarks arise after EWSB and are of order g ρ R a 2 ρ R M 2 W /M 2 ρ R . As we are fixing a ρ in the plot, the cross section is larger for larger g ρ R , as shown figure 1. We also notice that generally, ρ 0 L has one order of magnitude larger production rate than the ρ 0 R,X case because of the smallness of U(1) Y hyper-gauge coupling g in comparison with SU(2) L gauge coupling g. In figure 1, we have calculated the cross sections using the 4-flavor scheme. The inclusion of bottom parton distribution function (PDF) will increase the cross sections of ρ 0 L,R,X . As shown in table 3, the ρ 0 L,R,X b LbL couplings in models with quartet top partners have contributions of O(g ρ s 2 θ L ) due to the mixing of b L and B L , which can considerably enhance the cross section in some parameter space. For example, in LP 4 , for y L = 1 and M 4 = 1 TeV, g ρ L = 3, and M ρ L = 3 TeV, the bb fusion can increase σ(pp → ρ 0 L ) by 34%. In the following section, when we will study the bounds from the searches at the LHC, we also include the bb fusion production. The production of fermion resonances can be categorized into QCD pair production and electroweak single production processes (see ref. [20] for detail). The QCD production rate depends only on the mass of top partners. Since two heavy fermions are produced, the rate drops rapidly when the resonance's mass increases because of the PDF suppression. In contrast, the single production channels typically have larger rates in the high mass region, thus it can play an important role in the search for heavier resonance [20,[33][34][35][36][37][38][39][40][41]. This effect can be clearly seen from the figure 2, where we have plotted the cross sections for the resonances in the quartet at the 13 TeV LHC as functions of the Lagrangian parameter M 4 . For these plots, we have chosen the following parameters: where the parameter y R or y 2L is determined by the top mass requirement for the partially composite t (P) R in eq. (B.18) (the "P 4 scenario") or for the fully composite t (F) R in eq. (B.21) (the "F 4 scenario"), respectively. For the single production, we have combined the contribution of the top parters and their anti-particles. For example, for the charge-5/3 resonance -9 -JHEP01(2019)157 X 5/3 in the quartet case, the tW fusion process is defined as σ(tW → X 5/3 ) ≡ σ(pp → X 5/3t q +X 5/3 tq). (2.17) The tW → B and tZ → T, X 2/3 processes are defined in a similar way. Figure 2 shows that, for both P 4 and F 4 scenarios, tW → X 5/3 has the largest production rate among the 4 single production channels of the quartet fermionic resonances, and it dominates over the QCD pair production channel for M 4 1 TeV. Although the tW → X 5/3 rates of those two scenarios are similar under our parameter choice, the rate of tW → B channel in P 4 scenario is less than that in F 4 scenario. This is because the former is from the compositeelementary Yukawa interaction −y RBL φ − t (P) R (see eq. (B.17)) and proportional to c 2 θ L , while the latter is mainly controlled by the strong dynamics term −( √ (B.20)) without such suppression. As c θ L will increase with M 4 , we see the values of the two green lines in figure 2a and figure 2b become similar at large M 4 . 
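To make the mixing-angle dependence quoted above more concrete, the short sketch below evaluates the left-handed elementary-composite mixing, assuming the standard partial-compositeness form tan θ L = y L f /M 4 for the definition around eq. (B.11); the numerical values of f and M 4 are purely illustrative.

    import math

    def left_mixing(yL, f, M4):
        # Assumed partial-compositeness relation: tan(theta_L) = yL * f / M4.
        t = yL * f / M4
        s = t / math.sqrt(1.0 + t * t)    # sin(theta_L)
        c = 1.0 / math.sqrt(1.0 + t * t)  # cos(theta_L)
        return s, c

    # cos^2(theta_L) -> 1 as M4 grows at fixed yL and f, which is why the
    # tW -> B rates of the P4 scenario (proportional to cos^2 theta_L) and of
    # the F4 scenario (controlled by the c2 term) approach each other at large M4.
    f_GeV = 1400.0  # illustrative value only
    for M4_GeV in (1000.0, 2000.0, 3000.0):
        s, c = left_mixing(1.0, f_GeV, M4_GeV)
        print(f"M4 = {M4_GeV:.0f} GeV: sin thL = {s:.2f}, cos^2 thL = {c * c:.2f}")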
By naively using the Goldstone equivalence theorem, we expect if y L f /M 4 1 and the mass splittings of the top partners become negligible. From the figures we find that in the F 4 scenario it is indeed the case, but in the P 4 scenario it is not. The reasons is that in the P 4 scenario, large M 4 requires large y R to correctly reproduce the mass of the top quark (see eq. (B.18)), which results in a large mixing between the T and X 2/3 resonances as shown in eq. (B.19). Hence the naive estimate in eq. (2.18) does not hold. We emphasize that the single production rates are more model-dependent. For example, the tZ/W fusion rates in P 4 scenario increase when y L decreases. This is because the constraint from observed top quark mass requires a larger y R as y L decreases, while the fusion rates are proportional to (y R ) 2 . But in the F 4 scenario, the cross sections are rather insensitive to y L , since they are mainly determined by the c 2 term. Similar to the quartet case, the single production mechanism of the singlet top partner T dominates over the QCD pair production if it is heavier than O(1) TeV, as shown in figure 3. Besides the tZ → T fusion, the singlet can also be produced by bW fusion: In fact, the cross section of this channel is about an order of magnitude larger than the tZ fusion due to the large bottom PDF, as can be seen from the red solid lines in figure 3. Note that for the partially composite t (P) R scenario, we have chosen a somewhat larger value y L = 1.5 in order to correctly reproduce the mass of the top quark in eq. (B.39). Decay of the composite resonances Let's now turn to the decay of the vector resonances. 2 The decay branching ratios into different final states are determined by both the kinematics and the sizes of the couplings between the vector resonances and the final state particles. The parameter f is determined by eq. (B.14) and the parameters y R (LP 4 ), y 2L (LF 4 ) are fixed by reproducing the observed top quark running mass M t = 150 GeV at the TeV scale. Several comments are in order. In the low mass region M ρ L < M 4 , ρ L can only decay into SM final states. Since we are interested in the mass region M ρ M W,Z,h , we can neglect all the SM masses. Hence, the decaying branching ratios are completely determined by the couplings among ρ L and SM particles. As discussed above, only ρ L V L V L (h) (V = W, Z) couplings belong to the first class and are enhanced by the strong coupling g ρ L . Besides this, there are ρ LqL q L couplings, where q L are third generation left-handed quarks. They are of O(g ρ L s 2 θ L ) and can be relevant for the moderate size of s θ L . Therefore, the dominant decay channels for this mass region are JHEP01(2019)157 as shown in figure 4. There are no significant differences between the two models in this kinematical region. From the Goldstone equivalence theorem, the decay branching ratio of ρ + L into W + Z is the same as W + h in the limit of M ρ L M W,Z,h (see eq. (2.8)). We only plot the sum of the two channels in figure 4. The same argument applied to the W + W − , Zh decay channels of ρ 0 L . We also notice that for the SM light fermion channels, we have (c) The branching ratios of ρ + L in LF4. the accidental relations Br(ρ + L → jj) = 2×Br(ρ + L → + ν ) and Br(ρ 0 L → jj) = 2×Br(ρ 0 L → + − + ν ν ) as illustrated by refs. [14,42]. For the intermediate mass region, i.e. 
M 4 < M ρ L < 2M 4 , the decay channels with one third generation quark and one top partner (the "heavy-light" channels) are open kinematically. For the charged resonance ρ + L , we have plotted the sum of branching ratios of the decay channels tB and Tb and the sum of the decay channels X 5/3t and X 2/3b . For -12 - JHEP01(2019)157 the neutral resonances ρ 0 L , we have combined the channels tT and bB and their charge conjugate processes. Let's start the discussion from the model LP 4 . The branching ratios of such channels grow quickly once they are kinematically open. This rapid increase is due to the strong coupling enhancement. At the same time, there is also a difference between the tB + Tb channels and the X 5/3t + X 2/3b channels. The branching ratio for the former increases as M ρ + L becomes larger, while the branching ratio of the latter increases at the beginning then decreases as the mass of ρ + L increase. We first note that the couplings ρ + L X 5/3t , ρ + L X 2/3b are suppressed by the fine-tuning parameter ξ = v 2 /f 2 (see table 2). Since g ρ L and a ρ L are fixed, increasing mass M ρ + L will result in an increasing of the decay constant f and a smaller ξ parameter. The same behavior is also observed in the neutral resonance ρ 0 L decay channels of tT +bB and X 2/3t and their charge conjugates due to similar reasons. There is a difference here between the two models LP(F) 4 . For the partially composite t (P) R scenario, the decay channels ρ + L → X 5/3t + X 2/3b and ρ 0 L → X 2/3t +X 2/3 t can become sizable ∼ 10%. However, for the fully composite t (F) R , their branching ratios are below 1%. This is due to the fact that the couplings ρ + as can be seen clearly from table 2 and table 3. We also notice that ρ + L → tb, ρ 0 L → tt + bb decay channels are always sizable even in the intermediate mass region and the high mass region M ρ L > 2M 4 . This is due to the fact that we are fixing y L and M 4 . Hence, increasing M ρ L will also increase f . As a result, the left-handed mixing angle s θ L becomes larger. The branching ratio ranges from 20% to 40% in the intermediate mass region and above 10% in the high mass region. For the mass region of M ρ L > 2M 4 , the pure strong dynamics channels are kinematically allowed. Since their couplings are of O(g ρ ) and we expect that they will dominate. Among those channels, the ρ + L → X 5/3X2/3 channel has the largest branching ratios (above 60%), because they are the first and second lightest top partners. Note that the decaying channel into TB opens very slowly. In the parameter space under consideration, its branching ratio is always below 10% and smaller than those of the decay channels tb and tB + Tb. This behavior is due to the particular choice of our parameters in eq. (2.20). In particular, the masses of T, B are roughly given by Even for large M ρ L , the masses of T, B are ∼ 0.47 × M ρ L and the decay into TB suffers from phase space suppression. We also expect that other choices of the parameters (for example smaller value of y L ) will make this channel more relevant. Things are similar in the case of ρ 0 L , where the decay channels intoX 5/3 X 5/3 ,X 2/3 X 2/3 are dominant (> 60%) andT T +BB decaying channels are below 10%. Next we turn to the (1, 3) resonances ρ ±,0 R . The benchmark point is the same as that in the ρ ±,0 L case, with the replacement ρ L → ρ R . Unlike ρ + L , the ρ + R does not mix with SM gauge bosons before EWSB because of its quantum number. 
Consequently, its decay branching ratios to SM light fermions are tiny. For example, it is less than 10 −3 for the parameter space shown in figures 5a and 5c. The decaying branching ratio into tb is also suppressed because the corresponding coupling arises after EWSB and is of order (c) The branching ratios of ρ + R in RF4. O(g ρ R ξ). As a consequence, the ρ + R mainly decays into di-boson channels W + h + W + Z. In the intermediate mass region, the decaying into X 5/3t + X 2/3b channels dominate over all the other channels with branching ratio larger than 90% in both model LP 4 and LF 4 , as their left-handed couplings arise before EWSB. The decay channels into tB + Tb are very small (2% − 4%) for model LP 4 and below 10 −3 for model LF 4 . In the high-mass -14 - JHEP01(2019)157 region, the dominant decaying channels are X 5/3T + X 2/3B and X 5/3t + X 2/3b with similar branching ratios. It is interesting to see that the heavy-light decay channel is still sizable in the high-mass region, as the mixing angle s θ L becomes larger for larger ρ + R mass and the mass of T , B increase with M ρ R as discussed before. The neutral resonance ρ 0 R mixes with the SM Hypercharge gauge boson before EWSB, resulting in the relation Br(jj) = 22/27×Br( + − +ν ν ) [42], as shown in figures 5b and 5d. The branching ratios of the other decay channels of ρ 0 R are very similar to those of ρ 0 L , and we will not discuss them further. Finally, we study the (1, 1) resonance ρ 0 X . As an SO(4) singlet, the ρ 0 X can couple either to quartet Ψ 4 or to the singlet Ψ 1 , and the corresponding models are XP(F) 4 and XP(F) 1 , respectively. In our plots, the parameters chosen are very similar to the benchmark point of ρ ±,0 L , except for XP 4 where we choose y L = 1.5. For the XF 4,1 model, there is another parameter c 1 describing the direct interaction between the fully composite t (F) R and the ρ 0 X resonance, and it is set to be 1. For the XF 1 model, we further set c 1 (the parameter describing the interaction between t (F) R and the ρ 0 X , Ψ 1 resonances) to be 1. Since the U(1) X has no direct connection to the dynamical symmetry breaking SO(5) → SO(4), its corresponding spin-1 resonance ρ 0 X does not couple to the Goldstone boson H before EWSB. Consequently, the decaying branching ratios into SM di-bosons W + W − + Zh are very small (< 10 −4 ). The di-fermion decay channels of XP(F) 4 are very analogous to those of ρ 0 R in RP(F) 4 . The most relevant channels are ρ 0 X → tt + bb in the low-mass region, ρ 0 X → tT +tT + bB +bB in the intermediate mass region, and ρ 0 X →X 5/3 X 5/3 +X 2/3 X 2/3 in the high-mass region. In models with singlet top partner XP(F) 1 , since the b quark does not mix with the resonance, we classify as one of the "SM light fermions". Therefore, we have Br(jj) = Br( + − + ν ν ), as shown in the bottom panel of figure 6. In model XP 1 , the dominant decaying channels are ρ 0 X → tt in the low-mass region, ρ 0 X →t T + T t (∼ 70%) in the intermediate mass region and ρ 0 X → T T (∼ 70%) in the high mass region. The situation is similar in the model XF 1 except that in the high-mass region, the ρ 0 X → tt and ρ 0 X →t T + T t decaying channels are also relevant. Their branching ratios are around 20% and 40%, respectively. The present limits and prospective reaches at the LHC In this section, we present the current limits and prospective reaches for the simplified models at the LHC. 
Making projections For the projections at the high luminosity or high energy LHC, we extrapolate from the current LHC searches using a similar method as in ref. [43]. We described the method in detail in appendix E. There have been a number of searches for beyond the SM (BSM) resonances at the LHC, providing constraints to the composite Higgs models. To use a more generic and uniform notation in describing the searches, we denote the spin-1 resonances as ρ and the spin-1/2 resonances as F Q , where Q is the electric charge. The results at the 13 TeV LHC The branching ratios of ρ 0 X in XF1. Collaboration and corresponding integrated luminosity CMS at 77.3 fb −1 (for e channel) and 36.3 fb −1 (for µ channel) [12]. can be classified into two main groups. The first group is the Drell-Yan production and two-body decay of ρ, its various final states can be summarized as follows, 1. SM di-fermion final states, including di-lepton, di-jet, and the third generation quarkinvolved channels. We list the relevant measurements in table 5. In addition, there have been searches for singly produced top partners. Such channels typically have larger rates than the QCD pair production. However, they are also more model-dependent. Currently, the bW → F 2/3 → bW channel is explored by ATLAS at 3. For the new channels we propose in this paper, especially the cascade decays of the ρ resonances to the heavy fermionic resonances, there have been no dedicated searches. We estimate their exclusion by recasting existing searches using the SSDL final states ± ± +jets. In table 7, we have listed the existing searches for the resonances at the LHC using ± ± +jets final state. The upper limit on the cross section of the highest mass points considered in the searches and the corresponding number of events before any kinematic cuts are reported in the table. Motivated by these results, we assume that a limit can be set for N ( ± ± + jets) = 20 before any kinematical cuts. SM di-boson final states. The topology is qq Next, we present the results for models LP(F) 4 , RP(F) 4 , XP(F) 4 and XP(F) 1 in subsequent subsections. The results of LP 4 and LF 4 In this subsection, we investigate the current limits and prospective reaches on the models LP(F) 4 at the 13 TeV LHC. In the Lagrangian of LP 4 in eq. (B.4), there are 10 parameters: The results of LP4. The results of LP 4 and LF 4 are shown in figures 7a and 7b, respectively. Since g ρ L is fixed, f is determined by M ρ L , and we use its value to label the top horizontal axis. We plot the existing bounds from LHC searches and their extrapolations at 300 (3000) fb −1 in colored shaded regions. Besides the direct searches for resonances, the measurement of and ξ parameter can provide an indirect constraint. Currently LHC results imply ξ 0.13 [21,22], while the further constraints are expected to be as good as 0.066 (0.04) with 300 (3000) fb −1 of data [89][90][91]. We also plot the constraints on ξ in the figures as vertical black thin lines. Putting all the constraints and projections together, we see that the future data at the LHC will explore the parameter space of LP(F) 4 extensively. 4 The constraints are similar in the two models LP(F) 4 . For a relatively large value of g ρ L (for example, g ρ L = 3 in our benchmark point), the most sensitive channel in the M ρ L < 2M X 5/3 region is the W ± Z/W + W − search with boosted di-jet channel performed by the ATLAS Collaboration with integrated luminosity L = 79.8 fb −1 in ref. [11]. 
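To illustrate how a limit of this kind is projected to higher luminosities, a minimal sketch is given below. It uses a naive background-dominated 1/sqrt(L) scaling at fixed collider energy, which is only a simplified stand-in for the procedure of appendix E (the latter also tracks the shift of the probed mass point through the parton luminosities); the starting limit is a hypothetical number, not one taken from ref. [11].

    import math

    def scale_limit(sigma_limit_fb, lumi_from_fb, lumi_to_fb):
        # Background-dominated scaling: the upper limit on the number of signal
        # events grows like sqrt(N_bkg) ~ sqrt(L), so sigma_lim ~ 1/sqrt(L).
        return sigma_limit_fb * math.sqrt(lumi_from_fb / lumi_to_fb)

    sigma0 = 10.0  # fb, hypothetical 95% CL limit obtained with 79.8 fb^-1
    for lumi in (300.0, 3000.0):
        print(f"L = {lumi:.0f} fb^-1: sigma_lim ~ {scale_limit(sigma0, 79.8, lumi):.2f} fb")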
In such a mass region, the Γ ρ L /M ρ L ratio is ∼ 0.8% for our chosen parameters, thus the narrow width JHEP01(2019)157 approximation works very well. If g ρ L 2, the di-lepton + − channel by CMS with L = 77.3 fb −1 (e + e − ) + 36.3 fb −1 (µ + µ − ) [12] gives the strongest limit. Because of the large experimental uncertainty, the ρ → tt, bb and tb channels are not able to give competitive limits, although they have significant branching ratios. In figure 7, we only show the present limits and prospective reaches from ATLAS di-boson boosted jet channels in ref. [11]. It is clear from the figure that the interactions with light top-partner has affected the phenomenology of ρ L significantly. In particular, the present bound is relaxed from 4.2 TeV to 2.6 TeV for our benchmark parameters in eq. (3.1) as the mass of top partner changes from M X 5/3 M ρ L to M ρ L 2M X 5/3 . Once the decays into pair of top partners are kinematically open, the bound becomes very weak. At the same time, very light top partners have been excluded by the direct searches for the top partner. In the mass region of M X 5/3 < M ρ L 2M X 5/3 , the decays of ρ L into one top partner and one SM particles are kinematically allowed. The width of the ρ L resonance is enhanced by the existence of those new channels, but still within the narrow width range. For example, in our benchmark eq. (3.1), The ttZ final state from the decay channel ρ 0 L → tT has been studied both experimentally [70,94] and theoretically [95], but current experimental results are still too weak to be visible in figure 7. The ρ ± L → Tb → tbZ channel is studied phenomenologically in ref. [96]. In this work, we propose that the ttW ± → ± ± + jets final state from ρ ± L → tB/tX 5/3 can also be a good channel to probe such a heavy-light decay. In figure 7, we have plotted the contours for the constant number (= 20) of SSDL events summing all these decay channels at 300 fb −1 and 3 ab −1 LHC. These channels have sensitivity to the parameter space up to M ρ L = 3.8 TeV at 3 ab −1 LHC, but it still can't compete with the di-boson jet searches. This is due to the fact that the branching ratios into the heavy-light channels are not significantly larger than the di-boson channel and the decaying branching ratios to the SSDL are very small. It is interesting to explore other more complicated final states like 1 + jets and we leave this for future possible work. In the mass region of M ρ L > 2M X 5/3 , the spin-1 resonances will decay dominantly into pairs of top partners, as discussed in detail in section 2.2.2. We focus here on the decay channels resulting in the SSDL final states: ρ ±,0 L → X 5/3X2/3 /X 5/3X5/3 (see also refs. [96,97] for the study of these channels). We plot the contours with 20 SSDL events, summing over all the above decay channels for 300 fb −1 and 3 ab −1 LHC. The prospective for the cascade decay channels are very promising and comparable with direct searches for the pair produced X 5/3 . If the top partner is around 1 TeV, these channels can be promising to discover the heavy spin-1 resonance. 5 Note that in such region the Γ ρ L /M ρ L can be large. For example, for our benchmark point eq. (3.1), Γ ρ L /M ρ L varies from 56% to 37% when M X 5/3 varies from 0.1 × M ρ L to 0.4 × M ρ L . It is interesting to study the effects of large decay width on the resonance searches and we leave this for a future work. 
Here we just estimate the bounds with an event-counting method based on the SSDL final state, which does not require the reconstruction of a resonance peak; we expect such an estimate to depend only weakly on the width of ρ L . (Footnote 5: if the first generation light quarks have some degree of compositeness, as studied in ref. [98], the cascade decay channels become more important, since the Drell-Yan cross sections of ρ L are enhanced by an extra coupling of O(g ρ s θ 1q ), where θ 1q is the mixing angle between a first generation quark and the corresponding partner.) We have shown the present bounds and the prospects of the searches for QCD pair-produced X 5/3 X̄ 5/3 in the 1ℓ + jets final state by CMS [13]. The single top partner production may play an important role in the relatively high top partner mass region, as discussed in section 2.2.1. Currently, the tZ → T /X 2/3 channel has been searched for by CMS at 35.9 fb −1 [70], and the tW → X 5/3 channel has been searched for by CMS at 35.9 fb −1 [86] in the 1ℓ + jets final state and by ATLAS at 36.1 fb −1 [81] in the SSDL final state. However, the mass reaches of all these searches are still too low to be visible in our figures. Instead, in figure 7 we present the contours with a constant number of events (= 20) in the tW → X 5/3 → ℓ ± ℓ ± + jets final state as a projection for future runs of the LHC. The reach in model LP 4 ranges from 1.5 TeV to 2 TeV at the 300 fb −1 LHC and from 2.3 TeV to 3.1 TeV at the 3 ab −1 HL-LHC, which is better than the QCD pair searches (1.3 TeV at 300 fb −1 and 2.0 TeV at 3 ab −1 ).
The results of RP 4 and RF 4
We now turn to the models RP(F) 4 . Similar to the cases of LP(F) 4 , we have fixed the remaining parameters to the same benchmark values, with the replacement ρ L → ρ R , and scanned over (M ρ R , M X 5/3 ). The results are plotted in figure 8. The meanings of the shaded regions and contour lines are similar to those in figure 7. Note that we have started M ρ R from 1 TeV. Because the production cross sections of the charged ρ ± R resonances are very small, we only use the searches for the Drell-Yan production of ρ 0 R at the LHC. Similar to the search for the ρ L resonances, the di-boson channel provides the strongest constraints in the region M ρ R < 2M 4 . Among the existing limits, we found that the di-boson resonance searches by ATLAS in the semi-leptonic channel [57] and in the fully hadronic channel [11] give the strongest constraints, and their results are similar; here we show the limits from the results of ref. [57]. As expected, due to the smallness of the hypercharge gauge coupling, the bound is weaker than for the ρ L resonances. The present bound is around 1.6 TeV and will reach 3.8 TeV at the HL-LHC. In the mass region M X 5/3 < M ρ R ≲ 2M X 5/3 , the ρ 0 R → tX̄ 2/3 → tt̄Z channel may be relevant, but the current search in ref. [70] cannot yet place any relevant constraint on our parameter space, so it is not shown in the figure. In the mass region M ρ R > 2M X 5/3 , the cascade decay channel ρ 0 R → X 5/3 X̄ 5/3 in the SSDL final state is not competitive with the searches for QCD pair production of X 5/3 , due to the smallness of the production cross section. We can also read from the figure that the electroweak precision Ŝ-parameter measured at LEP [101] sets a strong constraint on the models with ρ R , requiring M ρ R ≳ 1.95 TeV, which is heavier than the current experimental reach. However, the reach of the LHC with an integrated luminosity of 300 fb −1 could surpass this constraint.
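As an illustration of how the N(ℓ ± ℓ ± + jets) = 20 criterion used in this section translates into a required cross section, the sketch below counts events before any kinematic cuts. The assumed same-sign-dilepton fraction is a crude estimate built from leptonic W branching fractions only (no combinatorics, acceptance or efficiencies), so all numbers are indicative rather than results of the paper.

    def sigma_needed_fb(n_events, lumi_fb, br_ssdl):
        # Cross section (in fb) giving n_events SSDL events before cuts.
        return n_events / (lumi_fb * br_ssdl)

    # Rough SSDL fraction for a cascade containing two same-sign W bosons,
    # each decaying to e or mu: Br(W -> e/mu + nu) ~ 0.21 per W.
    br_ssdl = 0.21 ** 2
    for lumi in (300.0, 3000.0):
        print(f"L = {lumi:.0f} fb^-1: need sigma x Br ~ {sigma_needed_fb(20, lumi, br_ssdl):.2f} fb")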
The bounds on the top partners are the same as in models LP(F) 4 and are not discussed further here.
[Figure 8: (a) the results of RP 4 ; (b) the results of RF 4 . The existing bounds from ref. [13] are shown as the darkest shaded regions, while the projections for 300 (3000) fb −1 are shown as lighter shaded regions. The event-number contours for N(ℓ ± ℓ ± + jets) = 20 are drawn as solid (dashed) lines for 300 (3000) fb −1 , as a prospective limit for the ρ 0 R → X 5/3 X̄ 5/3 (denoted ρ R → F F̄) and tW → X 5/3 channels.]
The results of XP 4 and XF 4
We now turn to the models with a singlet vector resonance ρ 0 X . In this subsection we discuss its interactions with the quartet top partner in models XP(F) 4 , while in the next subsection we investigate its interactions with the singlet top partner in models XP(F) 1 . As discussed in ref. [14], ρ X only contributes to the Y-parameter of the electroweak precision tests (see also eq. (B.33)). Due to the (g′/g ρ X ) 2 suppression, the indirect constraint on ρ X is weak. As a result, ρ X could be very light, especially in the case of large g ρ X . We choose benchmark values for the parameters analogous to the previous cases and scan over (M ρ X , M X 5/3 ) in figure 9. Note that we have chosen a slightly smaller value of a ρ X in order to relax the bound from the ξ measurement. Here we can see a difference between the partially composite t (P) R and the fully composite t (F) R scenarios. While the di-lepton channel [12] can play an important role in model XP 4 in the large M X 5/3 region (i.e. M X 5/3 > M ρ X ), it does not put any significant constraint on model XF 4 . This is due to the fact that the di-lepton branching ratio in model XP 4 scales like [g′/(g ρ X s θ L )] 4 , while in model XF 4 it scales like (g′/g ρ X ) 4 . As we fix y L , a larger value of M X 5/3 induces a smaller value of s θ L and an enhancement of the di-lepton branching ratio in model XP 4 .
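The parametric scalings just quoted can be compared numerically; the sketch below simply evaluates them for hypothetical input values of g′, g ρ X and s θ L , and is not tied to the benchmark of the figures.

    def br_ll_scaling_XP4(g_prime, g_rhoX, s_thetaL):
        # XP4: di-lepton branching ratio scales like [g'/(g_rhoX * s_thetaL)]^4
        return (g_prime / (g_rhoX * s_thetaL)) ** 4

    def br_ll_scaling_XF4(g_prime, g_rhoX):
        # XF4: di-lepton branching ratio scales like (g'/g_rhoX)^4
        return (g_prime / g_rhoX) ** 4

    g_prime, g_rhoX = 0.36, 3.0  # illustrative values
    for s_thetaL in (0.6, 0.3):
        print(s_thetaL, br_ll_scaling_XP4(g_prime, g_rhoX, s_thetaL),
              br_ll_scaling_XF4(g_prime, g_rhoX))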
We can also see that the limits from ttρ X channel become stronger in the low M X 5/3 region in model XP 4 , as the left-handed top quark mixing angle s θ L becomes large. We also noticed that the cascade decays to top partner can barely play an important role, as the cross section of ρ 0 X is small. The bounds on the quartet top partners are the same as models LP(F) 4 . . The dark green regions come from ttρ 0 X associated production, based on the phenomenological study of ref. [102]. The contours for N ( ± ± + jets) = 20 are drawn in solid (dashed) lines for 300 (3000) fb −1 , as a prospective limit for the bW → T → tρ 0 X (tt) channel (in black) and ttρ X → ttt T (bW ) channel (in red). See the main text for more details. The results of XP 1 and XF 1 Finally we come to the models containing a singlet top partner, XP 1 and XF 1 . While the scanning over (M ρ X , M T ), the other parameters are chosen as where we have chosen a slightly larger value of y L in model XP 1 in order to reproduce the observed value of top quark mass. Note that in model XP 1 , the top quark mass is approximately given by eq. (B.39) and the choice for y L in eq. (3.4) has fixed s θ R ∼ 0.6. This means that the couplings of the interactions ρ XtR t R , ρ XtR T R are roughly constants with varying mass of the top partner (see table 4). In both models, the Drell-Yan production of the ρ X can't play an important role in our interested parameter space, because of the lack of the sensitivity to the dominant decay channel tt and the suppression of the decay branching ratio into the di-lepton final state. In figure 10, we have shown the reach from the ttρ X production with the SSDL channel, including the analysis of ref. [102] in eq. (B.36)). For the top partner, we present the current limits and prospective reaches coming from the ATLAS searches for the QCD pair production of the top partner with the bW +b W − (1 + jets) final states [74]. Note that the single top partner searche performed by ATLAS in ref. [84] with integrated luminosity L = 3.2 fb −1 using the bW (→ ν) decay channel is not sensitive to our parameter space yet. 7 Instead, we find that the cascade decay of the top partner T into ρ X t with ρ X decaying into top pair in the single production channel can become relevant in the mass region of M T > M ρ X . For example, for M T = 2 TeV and M ρ X = 1 TeV, the branching ratio can reach 65.8% (93.8%) for XP(F) 1 in our parameter choice, due to the large coupling of ρ X t R T R in both models. Moreover, it will lead to the SSDL signature. In figure 10, we have estimated the reach of this channel with SSDL searches at the LHC with integrated luminosities 300 fb −1 and 3 ab −1 . This channel is very promising, and can become comparable with the four top final states in both models, especially in XP 1 . This is due to the fact that in model XP 1 , the branching ratio of this cascade decay channel is further enhanced by the s 2 θ R suppression oft X coupling, as can be seen from table 4. Summary In summary, focusing on the coupling regime g ρ ∼ 3, we have investigated the present limits and prospective reaches in the M ρ − M 4 , the mass region of M ρ > M 4 can also be explored by Drell-Yan production followed by decaying into the heavy-light final state tB/X 5/3t and the pure strong dynamics final state X 5/3X2/3 /X 5/3X5/3 in the SSDL channel. For the SO(4) singlet resonance ρ X (1, 1), the sensitivity to the dominant tt final state from Drell-Yan production is still limited by the experimental uncertainty. 
Instead, the + − channel is useful for XP 4 , while the ttρ 0 X associated production is useful for XF 4 and XF(P) 1 , as the cross section scales like g 2 ρ X (g 2 ρ X s 4 θ R ) and it can lead to four top final states with SSDL signature. We have recasted the analysis of ref. [102] in this SSDL channels in our parameter space. The cascade decaying channels (heavy-light and heavyheavy) in models XP(F) 4 can rarely play an important role because the cross section is small in the high mass region, and the very light top partners have already been excluded by the present experiments. In models XP(F) 1 , we find that the SSDL final states from the bW → T → tρ 0 X process can be very important in the M ρ X < M T region, while the SSDL channel of ttρ X → ttt T can be relevant in intermediate mass region. Finally, the QCD 7 For the theoretical studies of bW → T → bW/tZ/th channels, see refs. [34,40,[103][104][105][106][107]. (a) The results of LF4. pair production of top partners offers a robust probe for the models. At the same time, the singly produced channels have a much higher mass reach. For example, for the models with quartet top partners, the QCD pair channel and tW → X 5/3 channel could probe the parameter M X 5/3 up to ∼ 2 TeV and ∼ 2.5 − 4 TeV (depends on the f parameter) at the HL-LHC, respectively. The limits and reaches of the mass scale from present and future searches at he LHC are summarized in figure 12 (for models LP(F) 4 and RP(F) 4 ) and in figure 13 (for models XP(F) 4 and XP(F) 1 ). Future colliders Before we conclude our study, we make some estimates of the prospective reaches on the mass scales in our models at the 27 TeV HE-LHC and 100 TeV pp collider. In figure 11, we have used the method described in appendix E to extrapolate, based one the di-boson boosted-jet resonance searches at ATLAS [11] and the pair top partner searches in the 1 + jets channel at CMS [13] in model LF 4 . We present the results with the integrated luminosities of 3 ab 1 (1, 1). In addition, we have also studied the two scenarios depending on whether the right-handed top quark is elementary or fully composite. We have categorized the couplings of the composite resonances into four classes according to their expected sizes, O(g ρ ), O(g ρ s θ L , g ρ s θ R ), O(g SM ), and O(g 2 SM /g ρ ), where s θ L,R are the elementary-composite mixing angles s θ L , s θ R , and g SM is of the size of the Standard Model gauge and Yukawa couplings. The results are summarized in table 2, table 3 and table 4. Based on the discussion of the couplings, we have studied different production and decay channels for the composite resonances, paying special attention to the relevance of the cascade decay channels between the composite resonances. We have shown the present and future prospective bounds on our parameter space in the M ρ − M Ψ plane in different models, focusing on the moderate large coupling g ρ = 3. We found that the cascade decay channels into one top partner and one top quark tΨ or two top partners ΨΨ strongly affect the phenomenology of the ρ if they are kinematically open. Their presence significantly weakens the reach of the channels with only SM particles, such as the di-boson channel. In addition, the decay channels ρ + L → tB/X 5/3t and ρ + L → X 5/3X2/3 , ρ 0 L,R,X → X 5/3X5/3 can lead to the SSDL final states, which are used as an estimate of the reach on the M ρ − M Ψ plane. 
We found that they are comparable in some regions of the parameter space to the di-boson searches or the top partner searches at the HL-LHC, especially for the ρ L models LP(F) 4 . For the ρ R,X models RP(F) 4 , XP(F) 4 , because the Drell-Yan production is suppressed by the smallness of the hypercharge gauge coupling, the cascade decay channels play less important roles. We also find that the SSDL channels in the single production of the charge-5/3 top partner X 5/3 can always play an important role in our parameter spaces. In the models involving the singlet spin-1 resonance XP(F) 4 and XP(F) 1 , the associated production of top pair and the ρ X with the four top final states can play an important role, as the coupling between ρ X andtt is of O(g ρ ) for the fully composite t (F) R models and O(g ρ s 2 θ L ) or O(g ρ s 2 θ R ) for the partially composite t (P) R models. We have recast the analysis in the SSDL channel by ref. [102] in our parameter space. In models XP(F) 1 , the single production of the top partner T , followed by cascade decaying into tρ X (tt) can be important in the region M T > M ρ X , and we have explored its sensitivity in the SSDL channel. It can be better than the ttρ X (tt) SSDL channel in model XP 1 . In the mass region M T < M ρ X < 2M T , the tt fusion production of ρ X , which decays into t T , can lead to the tttbW + final state with SSDL signature. We have used this to explore its sensitivity. In figure 12 and figure 13, we have summarized the prospective reach on the mass scale M ρ and M Ψ by the different existing searches at the LHC and by various SSDL channels from the cascade decays. JHEP01(2019)157 Several directions should be explored further. Among the various cascade decay channels, we have only considered the SSDL final state. The reach obtained this way is conservative. Other decay final states, such as 1 +jets, should also be studied in detail. The final kinematical variables are usually very complicated, and new techniques such as machine learning may be useful to enhance the sensitiy. We hope to address the issues in a future work. JHEP01(2019)157 where A µ ≡ A a µ T a are the gauge fields corresponding to the unbroken generators. The d µ and e µ objects will transform under the non-linearized SO(5) group as: In MCHMs, only the subgroup SU(2) L × U(1) Y ∈ SO(4) × U(1) X is gauged, i.e. The last gauge field X µ , corresponding to the U(1) X group, is introduced to give correct hypercharge for the fermions, and the Goldstone bosons are neutral under this symmetry. The full formulae of d µ and e µ symbols can be obtained as follows [108] where the covariant derivative is given by: and the matrices t a L/R are defined in eq. (A.2). Because of eq. (A.5), the leading Lagrangian of the Goldstone fields is simply For the fermionic heavy resonances, they fall into the irreducible representations of the unbroken group SO(4) × U(1) X SU(2) L × SU(2) R × U(1) X . We will consider two irreducible representations: the quartet 4 2/3 and the singlet 1 2/3 as the lightest top partners. They are parametrized as follows: and transform as Ψ → H r Ψ ⊗ G X Ψ, where r Ψ is the SO(4) representation of Ψ, and G X denotes the group element of U(1) X . From the transformation rules in eq. (A.5), we can construct a covariant derivative acting on the composite fermionic fields Ψ: Taking into account of the U(1) X group, the covariant derivative becomes (∇ µ − ig 1 XB µ ). 
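For completeness, the leading two-derivative Goldstone Lagrangian referred to above (eq. (A.9)) takes, in the usual CCWZ normalization (an assumption about the conventions, since the equation itself is not reproduced in the text), the form
  L (2) π = (f 2 /4) d µ i d µ,i ,
which, after the matching to the doublet notation of appendix A.2, reproduces the canonical |D µ H| 2 kinetic term at leading order in H † H/f 2 .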
For the spin-1 resonances, we consider three irreducible representations under the unbroken SO(4): ρ L (3, 1), ρ R (1, 3) and ρ X (1, 1). JHEP01(2019)157 A.2 The matching to the Higgs doublet notation The CCWZ operators and the effective Lagrangians for the composite resonances can be written in terms of the fields that have the definite quantum number under the SM gauge group SU(2) L × U(1) Y . To see this, we first notice that the SM Higgs doublet with hypercharge Y = 1/2 can be written as follows: It is related with the quartet notation h by an unitary matrix P with determinant -1: The SO(4) generators can be converted to the doublet notation by using P : (A.14) Consequently, the h covariant derivative term can be rewritten as: where the D µ in the right-hand side of the equation is the normal SM covariant derivative: where hypercharge Y is given by Y = T 3 R + X. Using above results, we can easily rewrite the leading Lagrangian in eq. (A.9) in the doublet notation: For further convenience, we list the following useful identities: where the ↔ D µ is defined as: The quartet top partner fields, Ψ 4 can be decomposed as two SU(2) L doublets with hypercharge Y = 1/6, 7/6 as follows: with the same P matrix as defined in eq. (A.13). The SM fermions are assumed to be embedded in the 5 X representation of SO(5) × U(1) X with hypercharge given by Y = T 3 R + X. We only consider the top sector in our paper. For the SM SU(2) doublet q L = (t L , b L ) T , we have the embedding: The q 5 L formally transforms under the G ∈ SO(5) and G X ∈ U(1) X as q 5 L → G ⊗ G X q 5 L . For the right-handed top quark, we will consider two possibilities: t R as an elementary filed or as a massless bound state of the strong sector. In the first case, we also embed it in the representation of 5 2/3 : For the fully composite right-handed top quark, we assume that it is a singlet of SO(4), denoted as t (F) R and its interactions preserve the non-linearized SO (5). We denote those two treatments as partially and fully composite t R scenario, respectively. All the effective Lagrangian in MCHMs can be rewritten in terms of the doublet notation easily using eq. (A.7), eq. (A.18), eq. (A.20), eq. (A.21) and eq. (A.22). The full results are tedious, thus we will not list them here; however, their LO expansions in H † H/f 2 order will be listed and discussed in appendix B. B The models In this section, we briefly describe the models considered in our paper (see refs. [14,20,23]). We focus on the minimal coset SO(5) × U(1) X /SO(4) × U(1) X of the strong sector, where the Higgs bosons are the pseudo-Nambu-Goldstone bosons associated with this global symmetry breaking. B.1 The models involving ρ L (3, 1) and quartet top partners Ψ 4 (2, 2): LP(F) 4 We start from the models involving the ρ L and the quartet top partners Ψ 4 . The Lagrangian of the strong sector reads: where the field strength of the spin-1 resonance is defined as The Yukawa interactions between strong and elementary sector are: The fully Lagrangian is then written as [14,20,23] where we omitted the SM Lagrangians for the quark fields q L and t R . Note that the CCWZ covariant objects e a µ include the SM gauge fields: and we have written the formulae in terms of SM Higgs doublet H (see appendix A for the definition and derivation). Note that the SM gauge interactions don't preserve the nonlinearly realized SO(5) symmetry and provide the explicit breaking, thus will contribute to the Higgs potential at one-loop level. 
The term with coefficient c 1 involves the direct coupling between the ρ L and the quartet top partners at the order of g ρ L . As discussed in ref. [14], this interaction will have an important impact on the phenomenology of ρ L especially when m ρ L > 2M 4 and decaying into two top partners are allowed. In most of the case, we will choose c 1 = 1 as our benchmark point. Note that the mass term for the ρ L in eq. (B.1) will induce a linear mixing between them and the SM W µ gauge bosons before EWSB. Diagonalizing the mass matrix will lead to the partial compositeness of O(g 2 /g ρ L ) for the W bosons. As a result, the SM SU(2) L gauge coupling will be redefined as follows: and the W -mass at the LO is given by (see appendix C for detail): Due to the linear mixing, the mass of the ρ L will also be modified as follows: JHEP01(2019)157 Note that this direct mixing mass term will also lead to contribution toŜ-parameter in the low energy observable. Actually, integrating out the ρ L at the LO, we will obtain the O W operator (see ref. [14]), which leads to the contribution to theŜ parameter [92]: The ρ L resonance will be coupled to SM fermions universally with strength of O(g 2 /g ρ L ) due to the linear mixing. The non-universality comes from the linear mixing between the SM fermions and corresponding composite partners. Since the mixing is the source of the SM fermion masses after EWSB, it is roughly the order of the fermion Yukawa couplings. Thus we expect that only the third generation mixings (especially the top quark) have the important impact on phenomenology of the ρ L , which is the reason we only focus on the top sector. For the partially composite right-handed top quark scenario, we have two parameters y L , y R controlling the mixing between q L , t (P) R and the top partner Ψ 4 . Similar to the SM gauge bosons, there will be direct mixing between q L and the composite SU(2) L doublet Q before EWSB proportional to y L : .20). This motives us to define a left-handed mixing angle θ L as follows: which measures the partial compositeness of the SM fermions q L . Due to the linear mixing, the mass formulae for the fermionic resonances before EWSB are given by: Note that y L breaks the SO(4) explicitly and will contribute to theT parameter at the loop level, thus can't be too large. In contrast, t R is an SO(4) singlet so that y R term preserves the custodial symmetry can in principle can be large [111]. For the fully composite t (F) R , besides the mixing between q L and Ψ 4 (denoted also as y L ), we can write a direct coupling y 2L between q L and t (F) R . This term provides the main source of top quark mass. Since t (F) R belongs to the strong sector, there are also direct interactions between it and the composite resonances, which are written as the c 2 term in the L F 4 . As discussed in ref. [20], this strong interaction term provides the dominant contribution to decay of the top partners, especially when the mixing parameters are small. Note that it will be very useful to rewrite the Lagrangian in terms of SM SU(2) L ×U(1) Y notation, where the SM gauge symmetries are manifest. By using the formulae of the Goldstone matrix U and the d µ , e µ in the appendix A, we can write the Lagrangian L L 4 -34 -JHEP01(2019)157 using the doublet notation as follows: where the · · · denotes the higher order terms in H † H/f 2 and we have defined the O(1) parameter a ρ L as in ref. 
[23]: From the dimension-six operators involving the top partners and the Higgs fields, we can see that generally the gauge couplings of the top partners are modified at the O(ξ) after EWSB. Note that there is an accidental parity symmetry P LR in the kinetic Lagrangian for the quartet top partner defined as [112]: 15) and the couplings between eigenstates of this parity (X 5/3 , B) and the SM Z gauge bosons will not obtain any modification after EWSB. This can be easily seen by using the formulae for the currents in the vacuum: remembering that T 3 L (X 5/3 ) = T 3 R (X 5/3 ) = 1/2 and T 3 L (B) = T 3 R (B) = −1/2. This is important because ZB LBL are not modified by the Higgs VEV means that after the mixing between b L and B L , the Zb LbL remains the same as the SM canonical couplings. 8 Similarly, we can write the elementary-composite mixing Lagrangian L P 4 in the doublet notation: (B.17) where we only keep the leading terms in the expansion of H † H/f 2 . We can see clearly that after EWSB only the mass matrix in the top sector obtains corrections of O(y L f ξ, y R v), JHEP01(2019)157 while for the charge −1/3 and charge-5/3 resonances, their mass formulae are not modified. 9 After EWSB, the top mass is given by: where s θ L denotes sin θ L defined in eq. (B.11). The EWPT at the LEP prefers y L y R , thus y R mixing term is dominant. In the unitary gauge, this term becomes: So in the large y R limit, there will be a top partner (the heavier one) in the mass eigenstate, which will primarily decay into th and the other one will primarily decay into tZ. See appendix C for detail, where we summarize the mass matrices and mass formulae. As we will discuss below, in our consideration, we will focus on the region y R 1, this effect will not be manifest. For the fully composite t (F) R case, we have: . The top mass to the leading order is given by: where c θ L denotes cos θ L defined in eq. (B.11). So that the top Yukawa coupling is mainly determined by y 2L , which is different with partially composite t For the ρ R models, the effective Lagrangians read: where the definition of ρ a R µν is the same as in eq. (B.2) with (L → R). The effective Lagrangians in models RP(F) 4 are given by: Since we don't include the right-handed bottom quark mixings with bottom partners, the bottom quark remains massless. JHEP01(2019)157 where the Lagrangians L P(F) 4 are the same as in eq. (B.1). In terms of doublet notation, we have: where we only show the terms involving the ρ R and defined: Note that similar with ρ L , there is a direct mixing between ρ 3R µ and the hypercharge field B µ . So the U(1) Y gauge coupling is redefined as follows: and the Z-mass to the LO is given by: Note that this direct mixing mass term will also lead to contribution toŜ-parameter in the low energy observable: integrating out the ρ R will result in the O B operator and (B.28) As can been seen from eq. (B.24), for the neutral resonance ρ 3 R , it has the universal coupling of O(g 2 /g ρ R ) to the SM fermions, while for the charged ρ R , its coupling arise from O(ξ). This makes ρ 0 R more produced at the LHC than the charged one and thus the most stringent constraint on the ρ R models comes from the neutral spin-1 resonance searches. Because of the smallness of U(1) Y gauge coupling g compared with SU(2) L gauge coupling g, its constraints are weaker than ρ L . 
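The coupling redefinitions referred to above (for the SU(2) L coupling g in the ρ L model and the U(1) Y coupling g′ in the ρ R and ρ X models) are not written out in the text; in two-site constructions of this type they take the standard form, quoted here as an assumption about the conventions rather than from the original equations,
  1/g 2 = 1/g 2 el + 1/g 2 ρ L ,   1/g′ 2 = 1/g′ 2 el + 1/g 2 ρ R  (and analogously for ρ X ),
so that for g ρ ≫ g el the observed SM couplings essentially coincide with the elementary ones, consistent with keeping only the leading terms in g/g ρ in section 2.1.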
For the direct interactions with the fermionic resonances (the c 1 term), they are similar to the ρ L interactions except that the charged currents are between Q and Q X . For the models involving the ρ X and the quartet Ψ 4 , the Lagrangian containing the ρ X are given by: JHEP01(2019)157 where ρ Xµν = ∂ µ ρ Xν − ∂ ν ρ Xµ , and where the Lagrangians L P(F) 4 are the same as in eq. (B.1). Similar to ρ 3 R µ , ρ Xµ is mixing with the hypercharge gauge field B µ , thus will have a universal coupling of O(g 2 /g ρ X ) to the SM elementary fermions. The U(1) Y gauge coupling g is redefined as: (B.31) Similar to the case of ρ L,R , we will also define the O(1) parameter a ρ X as follows: ρ X will not contribute toŜ-parameter because of its singlet nature, but will contribute to the Y -parameter (defined in ref. [92]) as follows: The extra suppression factor (g /g ρ X ) 2 will make the constraint on the mass of the ρ X from EWPT much weaker than ρ L,R . For the case of fully composite right-handed top quark, a direct interaction term between ρ X and t (F) R can be written down. The coefficient is denoted as c 1 in eq. (B.30). This term is special in the sense that it can affect the decay of ρ X and also can lead to a new production mechanism of ρ X : tt fusion. The decay of ρ X into a pair of top quark will result in four top final states, which can be probed using the SSDL final state [102]. Finally, we consider the models involving ρ X and the singlet Ψ 1 . The Lagrangian involving the heavy resoances read: The mixing term is given by: and the effective Lagrangians in models XP(F) 1 are: JHEP01(2019)157 Note that here besides the c 1 term, we also have the non-diagonalized interaction, i.e. the c 1 term. The mixing term between the elementary SM quarks and the composite fields can be rewritten in terms of doublet notation. The results read: For the model XP 1 , the linear mixing term between t (P) R and the singlet T will lead to the partial compositeness of the right-handed top quark with mixing angle θ R : The top partner mass and the top mass will become: For the fully composite t (F) R , the top mass is simply: In both XP 1 and XF 1 models, the y L mixing term controls the top partner T decay, as this is the leading term with trilinear interactions violating the top partner fermion number. By using the Goldstone equivalence theorem, we can easily see the following branching ratios for the decay of the singlet T : where the factor 2 in the branching ratios comes from the √ 2 suppression of the real scalar fields compared with complex scalar fields. C The mass matrices and the mass eigenstates Before EWSB, the mixing between the composite resonances and SM particles can be easily and exactly solved, as stated in appendix B of this paper. However, after EWSB, i.e. h = (0, 0, 0, h ) T , all particles with the same electric charge and spin will be generally mixed, and it is impossible to analytically resolve the mixing matrices exactly. In this section, we list all mass matrices after EWSB, and use perturbation method to derive the mass eigenvalues up to ξ = v 2 /f 2 level. C.1 The spin-1 resonances Due to the SM gauge quantum number, ρ a L L mixes with W a L , while ρ 3 R R and ρ 0 X mix with B before EWSB, and the mixing angles are determined by tan θ ρ = g SM /g ρ . The VEV of Higgs will provide O(ξ) modifications to such pictures. Below, we will give the mass eigenvalues up to ξ level for the vector bosons. 
By using ξ as the expansion parameter, we can diagonalize the above matrices perturbatively. Up to order ξ, the mass eigenvalues of the SM gauge bosons are and the photon is massless, due to the residual electromagnetic gauge invariance. Note that the T̂ parameter is 0, as expected. For the spin-1 resonances, the mass eigenvalues are (C.5)
C.1.2 The ρ R (1, 3) resonance
We can obtain the mass terms from the Lagrangian as follows: where and (C.8). The mass eigenvalues can be derived as a series in ξ, and we list the terms up to order ξ here. For the SM gauge bosons, the results are and the photon is massless. For the composite vector resonances, the results are M 2 ρ ± R = m 2 ρ R , and (C.10)
C.1.3 The ρ X (1, 1) resonance
For the ρ X , the mass matrices read: The W ± 's are already mass eigenstates because there are no charged vector bosons mixing with them. Up to order ξ, the SM gauge bosons have the same mass eigenvalues as in eq. (C.9), while the ρ 0 X has mass (C.13)
C.2 The fermionic resonances
In this appendix we consider the SO(4) quartet and singlet spin-1/2 resonances, and for each case we discuss both the partially and the fully composite t R scenarios. The X 5/3 does not mix with any particle in the SM because of its exotic charge. In the quartet case, the mixing between b L and B L is not affected by the EWSB and has been exactly solved in appendix B, while in the singlet case the b L quark has no mixing in the unitary gauge (in our massless-b approximation). Below we only discuss the mass matrices of the charge-2/3 fermions. where the mass matrices are (C.17). In this scenario, the lightest charge-2/3 top partner X 2/3 has a mass degenerate with X 5/3 up to order ξ.
C.2.2 The Ψ 1 (1, 1) resonance
The fermion mass term is Singular value decomposition is used to find the mass eigenvalues, and up to order ξ we obtain for P 1 , and for F 1 ,
D The NNLO cross sections for QCD pair production of the top partners
In this appendix, we list the cross sections for the QCD pair production of the top partners. They are calculated using the Top++2.0 package, at NNLO level with next-to-next-to-leading logarithmic soft-gluon resummation [26][27][28][29][30][31]. The results are shown in table 8.
E The extrapolation method
In this appendix, we sketch the method we used to extrapolate the existing searches to the future high-luminosity or high-energy LHC. We refer the reader to ref. [43] for a detailed description of the method. The basic assumption of the method is that the same number of background events in the signal region of two searches with different luminosity and collider energy will result in the same upper limit on the number of signal events. To be specific, from an existing resonance search at collider energy √ s 0 with integrated luminosity L 0 , we can obtain the 95% CL upper limit on the σ × Br for a given channel at the mass m 0 ρ , which is denoted as [σ × Br] 95 . Here dL ij /dŝ is the parton luminosity defined as [43,113]: We have chosen the factorization scale to be the partonic center-of-mass energy √ŝ . Note that if the signal and the main background come from the same parton initial states, the method is the same as in ref. [114]. For the QCD pair production of top partners, we have chosen an invariant mass squared window around (2M F ) 2 , where M F is the mass of the top partner under consideration. This adjustment makes use of the fact that the heavy fermion pair is mainly produced at threshold. For single production (e.g.
tW or tZ fusion) of a fermionic resonance, although there is no invariant mass peak in such channels, we still apply the extrapolation method to the invariant mass squared at (M F + M t ) 2 to set an estimated limit. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
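To make the extrapolation logic of appendix E concrete, here is a minimal sketch of the rescaling procedure (equal background yields imply equal limits on the number of signal events). The parton-luminosity shape and every function name below are illustrative placeholders, not the implementation used for the results above.

import numpy as np
from scipy.optimize import brentq

def parton_luminosity(shat, sqrt_s):
    # Placeholder, steeply falling shape standing in for the true dL/dshat of the
    # dominant background initial state (in practice tabulated from PDFs).
    tau = shat / sqrt_s**2
    return (1.0 - tau) ** 6 / tau**2

def extrapolate_limit(m_grid, limit_ref, sqrt_s0, lumi0, sqrt_s, lumi, mass):
    # Rescale a reference 95% CL limit on sigma x Br, measured at (sqrt_s0, lumi0)
    # as a function of resonance mass, to a new setup (sqrt_s, lumi) at `mass`.
    # Step 1: find the reference mass m0 with the same expected background yield.
    target = lumi * parton_luminosity(mass**2, sqrt_s)
    mismatch = lambda m0: lumi0 * parton_luminosity(m0**2, sqrt_s0) - target
    m0 = brentq(mismatch, m_grid[0], m_grid[-1])  # assumes the bracket contains a match
    # Step 2: equal limit on the number of signal events -> rescale sigma x Br by L0/L.
    return np.interp(m0, m_grid, limit_ref) * lumi0 / lumi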
19,745.6
2019-01-01T00:00:00.000
[ "Physics" ]
Offline Memory Reprocessing: Involvement of the Brain's Default Network in Spontaneous Thought Processes Background Spontaneous thought processes (STPs), also called daydreaming or mind-wandering, occur ubiquitously in daily life. However, the functional significance of STPs remains largely unknown. Methodology/Principal Findings Using functional magnetic resonance imaging (fMRI), we first identified an STPs-network whose activity was positively correlated with the subjects' tendency of having STPs during a task-free state. The STPs-network was then found to be strongly associated with the default network, which has previously been established as being active during the task-free state. Interestingly, we found that offline reprocessing of previously memorized information further increased the activity of the STPs-network regions, although during a state with less STPs. In addition, we found that the STPs-network kept a dynamic balance between functional integration and functional separation among its component regions to execute offline memory reprocessing in STPs. Conclusion/Significance These findings strengthen a view that offline memory reprocessing and STPs share the brain's default network, and thus indicate that offline memory reprocessing may be a predetermined function of STPs. This supports the perspective that memory can be consolidated and modified during STPs, and thus gives rise to a dynamic behavior dependent on both previous external and internal experiences.
Introduction
An active internal mental life is a common human experience. Neural imaging studies have found that the human brain is highly active even when it is not engaged in specific tasks [1]. Spontaneous thought processes (STPs), also referred to as daydreaming [2,3] or mind-wandering [4], occur frequently in the absence of, or along with low levels of, demand from a task [5]. Since STPs are ubiquitous in everyday life, researchers have suggested that they constitute a psychological baseline for human brain functions [6,7]. Despite their prevalence, the functional significance of STPs remains largely unknown. Some studies have suggested that STPs enable individuals to maintain an optimal level of arousal [6,8]. However, the contents of STPs include everything from mundane recounts of recent happenings to plans and expectations about the future [4,[9][10][11][12], causing us to hypothesize a more complex role, with additional functions beyond the one mentioned above. In the present study, we hypothesized that offline memory reprocessing is an important underlying process of STPs. Offline memory reprocessing has been used to refer to the process during which the brain cuts out normal input from the outside world, and looks for older memories that are relevant to memories from the recent past to see if the older memories can be usefully linked to the newer ones [13][14][15]. Through offline memory reprocessing, the brain accomplishes memory consolidation [13][14][15] and memory revision [16,17]. Depending on daily experiences, daydreaming/mind-wandering usually occurs in the context of memory [16,17]. In addition, although the memorization of events or facts is believed to mainly involve the hippocampus and associated cortices [18,19], recent studies have found that offline memory reprocessing usually activates the precuneus and some ventrolateral and dorsolateral prefrontal regions [20][21][22].
These regions belong to an active default network during the task-free state [1,8,23], in which STPs are most likely to occur [4,6,7]. Therefore, we suspect that offline memory reprocessing, which is thought to mainly take place during sleep [13][14][15], may also happen automatically during STPs. To test this hypothesis, we used spontaneous activity and functional connectivity MRI to explore the relationship between offline memory reprocessing and STPs. In this study, participants were first instructed to maintain a natural resting state (N-Rest) during a six-minute fMRI scan by closing their eyes, relaxing their minds and stopping any structured mental processes such as counting. About one hour after the N-Rest scans, those subjects were asked to memorize details of a picture for about ten minutes, and a second scan was performed after the memory task. During the second six-minute scan, the subjects were requested to maintain a state in which they closed their eyes and recalled the details of the picture. For each subject, we performed a thought query immediately after the scan. According to their reports, they were mainly involved in the following cognitive processes: (1) recalling the details of the picture (contents, color, shape and so on); (2) picture-related imagination; (3) picture-related processing: they reported that the retrieved picture contents seemed to change during the scanning period; (4) picture-unrelated STPs. The subjects spent most of the time (85±10%, mean±SD; data can be seen in Table S1 of Supplementary Materials) on the offline reprocessing of memorized information (including all those picture-related processes). Therefore, we named the second state the offline memory reprocessing (M-Reprocess) state. We first identified the neural network associated with STPs (the "STPs-network"), and then investigated the modulations of activities and interactions within the STPs-network by offline memory reprocessing. The results highlight the relationship between STPs and offline memory reprocessing, and offer convergent evidence for our hypothesis that offline memory reprocessing is a possible additional function of STPs.
Results
In this study, we used regional homogeneity (ReHo) to measure the spontaneous activity level [24]. The ReHo method is based on the assumption that the activities of voxels within a functional brain area are more temporally synchronous in a high state of activity than in a low state. In the paper in which we described the ReHo methodology [24], we demonstrated that motor-related brain regions showed a higher ReHo value during a motor task than in the resting state. Additionally, the posterior cingulate cortex, a critical node in the default mode network [23,25], showed a higher ReHo value in the resting state than in the task state. These findings suggest that ReHo values, although not directly measuring neuronal activity, can be used to assess brain activity level. As shown in Figure 1, visual inspection indicated high ReHo-reflected activity in the posterior cingulate cortex, the precuneus, the medial prefrontal cortex, the bilateral inferior parietal cortex and the bilateral dorsolateral prefrontal cortex during the N-Rest period. Such a network is consistent with the brain's default network, which has been found to be active during the resting state [23,25]. This result further indicates that the ReHo value can be used as an index of brain activity level.
Moreover, the activity pattern during the M-Reprocess appears similar to that of the N-Rest, possibly suggesting that offline memory reprocessing, the dominant process during the M-Reprocess, is also an important process during the N-Rest. In the present study, we quantified the subjects' tendencies to have STPs by measuring their daydreaming frequencies using the Imaginal Process Inventory (IPI) [26] (details of their scores can be seen in Table S1 and Figure 2). It should be noted that we cannot measure the precise amount of STPs during the scanning. However, it has been suggested that individuals exhibit individually consistent differences in their propensity to daydream/mind-wander [3]. Therefore, the IPI-measured STPs frequencies could be used to evaluate the differences across subjects. We speculated that if a subject tends to have more STPs, the STPs-network regions would show stronger activity during the N-Rest. Therefore, we first performed a voxel-wise correlation analysis between individuals' STPs frequencies and their ReHo-reflected activities during the N-Rest. We then converted the resultant network (with a threshold of P < 0.05 for individual voxels and a cluster size > 5 voxels; network details can be seen in Figure S1 of Supplementary Materials) into a binary image and used it as an "inclusive" mask in the subsequent analysis. Next, to test whether the activity of the STPs-network regions could be modulated by externally evoked offline memory reprocessing, a random-effect paired t-test was performed between the M-Reprocess and the N-Rest (with a combined threshold of P < 0.05 for individual voxels and a cluster size > 216 mm³, equal to 8 voxels; this yields a corrected threshold of P < 0.05 for multiple comparisons within the mask, as determined by a Monte Carlo simulation using the AFNI AlphaSim program [27]). Our results indicated that the STPs-network regions, including the left precuneus (PCu, BA 7), the left angular gyrus/superior occipital gyrus (AG/SOG, BA 39/19), the left inferior parietal lobule (IPL, BA 40), the medial prefrontal gyrus (mPFG, BA 8/6) and the left hippocampus/parahippocampus (HIP/PHIP), showed significant correlations with the subjects' mind-wandering frequencies (Figure 2). These STPs-network regions also showed significantly stronger ReHo-reflected activity in the M-Reprocess than in the N-Rest. No region in the STPs-network showed significantly lower ReHo-reflected activity in the M-Reprocess than in the N-Rest. In addition, we investigated whether offline memory reprocessing could modulate the interactions between the STPs-network regions. As shown in Figure 3, the functional connectivities associated with the HIP/PHIP were significantly weaker than the other functional connectivities, both in the N-Rest (P < 10⁻⁹) and in the M-Reprocess (P < 10⁻¹⁰). In addition, the functional connectivity between the PCu and the mPFG was significantly stronger (P < 0.005) during the M-Reprocess than during the N-Rest. The functional connectivity between the HIP/PHIP and the PCu was significantly stronger (P < 0.05) during the N-Rest than during the M-Reprocess. Other functional connectivities were not significantly different between the N-Rest and the M-Reprocess.
Discussion
In the present study, we used ReHo-reflected activity to measure brain spontaneous activity during the N-Rest and the M-Reprocess.
Our results indicate that the STPs-network regions, including the PCu, the AG/SOG, the IPL, the mPFG and the HIP/PHIP, showed significant correlations with the frequency of the subjects' mind-wandering during the N-Rest. That is, when subjects had more STPs, these regions showed higher ReHo-reflected activity in order to complete such STPs. These STPs-network regions appear to be located in the default network [23,25,28]. Therefore, this finding is highly consistent with previous findings that the activity in the default network regions was associated with STPs [8]. We also found that these regions showed large ReHo-reflected activity in the M-Reprocess. This result is compatible with previous findings that the HIP/PHIP, the mPFG, the IPL and the PCu are associated with offline memory reprocessing [20][21][22][29,30] and that the SOG is associated with visual-related memory processes [31][32][33]. We found that the activities of those regions in the N-Rest are positively correlated with the frequency of mind-wandering in the subjects. According to self-reports provided by the subjects immediately after the scanning, all the subjects had less mind-wandering during the second scan than during the first one (details can be seen in Table S1 of Supplementary Materials). If offline memory reprocessing were not related to STPs, lower ReHo-reflected activity in those STPs-network regions in the M-Reprocess would be expected because of the lower level of STPs. However, we found that the M-Reprocess further increased the amplitudes of the ReHo-reflected activity in the STPs-network regions. Taken together, these results suggest that offline memory reprocessing is an important process in STPs. Therefore, more offline memory reprocessing during the M-Reprocess period than during the N-Rest will trigger higher ReHo-reflected activity in the STPs-network regions. It has been suggested that the amplitude of spontaneous fluctuations (ASF) could also partially indicate the level of regional spontaneous neuronal activity [34,35]. To further clarify the relationship between spontaneous activity in the default network regions and STPs, we analyzed the ASF in these regions using a power spectrum method. For a given voxel, we first transformed the time series into the frequency domain using a fast Fourier transform and obtained its power spectrum. Because the power at a given frequency is proportional to the square of the amplitude of this frequency component in the original time series, we calculated the square root of the power spectrum at each frequency, and then averaged them across the frequency range to get the ASF value. After calculating the ASF values at all voxels, we also scaled the ASF map by dividing it by its average value over the entire brain. By investigating the averaged ASF in these STPs-network regions, we found that the ASFs in the STPs-network regions were also significantly correlated with the subjects' STPs frequencies (Figure S2 in Supplementary Materials). This result further indicates that the spontaneous activity in these regions is modulated by STPs. In addition, the averaged power spectra in the low-frequency range were stronger in the PCu, the mPFG and the IPL during the M-Reprocess than during the N-Rest, and were similar in the AG/SOG and HIP/PHIP between the N-Rest and the M-Reprocess (Figure S3 in Supplementary Materials).
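As an aside, the ASF measure just described (square root of the power spectrum, averaged across frequencies, scaled by the mean over the brain) can be sketched in a few lines of code. The function name, the repetition time and the optional frequency band below are illustrative assumptions, not values taken from the paper.

import numpy as np

def asf_map(bold, tr=2.0, band=None):
    # bold: 4-D (x, y, z, t) array of preprocessed BOLD time series.
    n_t = bold.shape[-1]
    freqs = np.fft.rfftfreq(n_t, d=tr)
    amplitude = np.abs(np.fft.rfft(bold, axis=-1))        # square root of the power spectrum
    keep = freqs > 0                                       # drop the DC component
    if band is not None:
        keep &= (freqs >= band[0]) & (freqs <= band[1])    # e.g. a low-frequency range
    asf = amplitude[..., keep].mean(axis=-1)               # average across the retained frequencies
    return asf / asf.mean()                                # scale by the mean (in practice over in-brain voxels)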
Taken together, these ASF and power-spectrum results indicate that the STPs-network regions tended to have a larger signal strength when a subject was involved in STPs during the N-Rest, but showed a similar or even larger signal strength during the M-Reprocess state with a lower level of mind-wandering. Therefore, these results further support our hypothesis that offline memory reprocessing plays an important role in STPs. From the functional connectivity analysis, we found that most of the functional connectivities between the various STPs-network regions were not significantly different between the N-Rest and the M-Reprocess. This result suggests that offline memory reprocessing and STPs can similarly modulate the interactions within the STPs-network. The interactions among the STPs-network regions reflect a functional integration that may facilitate the retrieval and integration of relevant information components [36]. Interestingly, we found that the functional connectivities associated with the HIP/PHIP tended to be weaker than the other functional connectivities in both the N-Rest and the M-Reprocess. This result suggests that different STPs-network regions may contribute specialized functions that are organized into subsystems. In fact, previous studies have revealed that the brain's default network comprises at least two distinct, interacting subsystems: one is the medial temporal subsystem associated with the HIP/PHIP, and the other is a subsystem associated with the mPFG (for a review, see [37]). This is consistent with our finding that the HIP/PHIP and the other STPs-network regions are not highly correlated with each other. An explanation for such functional separation is that the HIP/PHIP is associated with receiving memory-related information and comparing/combining distinct memory representations [18], whereas the PCu, the IPL, the AG and the mPFG have been suggested as being associated with high-level offline memory processes, such as self-referential thought or time-sequencing/organizing of recalled information [38][39][40][41]. Compared with the N-Rest, the functional connectivity between the PCu and the mPFG was significantly stronger during the M-Reprocess, suggesting a strengthened interaction between the two regions. However, the functional connectivity between the HIP/PHIP and the PCu showed a tendency to be weaker (P < 0.05, uncorrected) during the M-Reprocess than during the N-Rest. Our ReHo analysis found that both the HIP/PHIP and the PCu showed increased ReHo-reflected activity during the M-Reprocess compared with the N-Rest (Figure 2). Therefore, these results indicate that although both the HIP/PHIP and the PCu were more involved in the M-Reprocess, their activities showed a tendency to become de-coherent during the more demanding state, further suggesting their different roles in offline memory reprocessing. Taken together, the execution of offline memory reprocessing by the STPs-network would be expected to be a combined effect of functional integration and functional separation among its components. Of course, the low statistical significance makes it difficult to give a strong interpretation of this result. Future studies are still needed to clarify the working mechanism of the STPs-network in offline memory reprocessing. It should be noted that our experimental design could not completely exclude the possibility that the results may be influenced by the passage of time.
As the M-Reprocess always occurred after the N-Rest for each participant, it is likely that some confounding factors, such as the difference in the anxiety about the scanner environment, were involved in the main effect of the differences between the two states. Future studies are needed to further address this issue. For example, the inclusion of an appropriate 'control' state during which the subjects performed a continuously demanding task would provide a helpful reference for our results. In addition, the inclusion of another group with a different experimental design, such as the N-Rest followed by another N-Rest, could also help to resolve this issue. Of course, the results of an independent experiment would offer stronger evidence for the present study. Overall, the present study provides new insight into the functional significance of STPs. Mental processes can be evoked either by external stimuli or by internal dynamics occurring spontaneously without a specific task. Although these two kinds of processes are different phenomena, they may share similar underlying mechanisms [42]. Our results suggest that offline memory reprocessing and STPs share a similar neural network, and that offline memory reprocessing is a possible function of STPs. From a physiological viewpoint, offline memory reprocessing plays an important role in waking-state memory consolidation and memory revision, which is necessary for the permanent storage of memory [43,44]. In addition, considering the similarity between the present STPs-network and the core brain system that mediates past and future thinking [45], it is also possible that offline memory reprocessing plays an important role in combining information from the past and the present to generate predictions about the future [45,46]. Taken together, it seems reasonable to speculate that the brain automatically wanders, if not required by external tasks, for the offline replaying of encoded information to further enable the consolidation or modification of memory [47]. This is consistent with the perspective that memory can be modified during STPs, and thus gives rise to a dynamic behavior dependent on previous experiences, both external and internal [16,17]. More generally, STPs are believed to constitute a psychological baseline for human brain functions, during which some other intrinsic mental processes, such as mental imagery and introspective evaluative processes, may also be involved [48][49][50]. Offline memory processing also plays a fundamental role in those intrinsic processes [42]. Therefore, the default network of STPs may be the common basis for all those intrinsic mental processes. During a deliberate goal-directed task state, STPs are suspended [1,8,23]; but when external demands are low, STPs automatically occur, driven by internal dynamics. In other words, rather than letting time pass with an idle brain, there is a spontaneous switch between externally-driven and internally-driven processes, which greatly increases the effectiveness of the brain.
Subjects
This study was approved by the Human Research Ethics Committee of Xuanwu Hospital of Capital Medical University. Thirteen subjects (7 female, 6 male; mean age 25.3 years, range 23-28 years) participated in the study. Written informed consent was obtained from all subjects. All participants were right-handed [51] and had no history of neurological or psychiatric disorders.
Twelve of the thirteen subjects completed a 12-item daydream frequency scale of the Imaginal Process Inventory [26] to measure each individual's tendency to mind-wander. Only these twelve subjects were used in the subsequent analysis.
Data acquisition
All images were scanned on a 3.0 Tesla Siemens MR system. A foam pad and headphones were used to reduce head motion and scanner noise. Blood oxygen level dependent (BOLD) images of the entire brain were acquired in 32 axial slices by using an echo-planar imaging sequence [TR/TE = 2000/30 ms, flip angle (FA) = 90°, field of view (FOV) = 22 cm, matrix = 64 × 64, thickness = 3 mm, gap = 1 mm]. In the present study, participants were first instructed to maintain a natural resting state (N-Rest) during a six-minute fMRI scan by closing their eyes, relaxing their minds and stopping any structured mental processes such as counting. More importantly, all participants were asked to stay awake during the examinations. The scanning lasted for 6 minutes and 180 BOLD images were acquired. About one hour after the first scan (mean: 1 hour, range: 45-75 minutes), each subject was asked to memorize a landscape picture for about 10 minutes. The picture was selected from the International Affective Picture System (IAPS). Immediately after this procedure, the subject was scanned a second time using the same parameters as above. During the second scan, subjects were instructed to close their eyes and recall the content of the picture as much as possible (offline memory reprocessing state, M-Reprocess). This M-Reprocess scanning procedure also lasted for 6 minutes and 180 BOLD images were acquired. We designed the experiment based on the following considerations: (1) the N-Rest is a task-free state, and the effect of STPs is easy to identify; (2) we hypothesized that offline memory reprocessing is an important function of STPs, and the M-Reprocess mainly involves offline memory reprocessing. According to our pre-experiment, if the M-Reprocess had been performed before the N-Rest, the subjects could not have controlled the retrieval of the memorized information during the N-Rest. In this situation, it would be difficult to discriminate the effect of natural STPs from the goal-directed memory reprocessing processes during the N-Rest. However, the N-Rest is a natural resting state, and the subjects reported no particular influence on the latter M-Reprocess state.
Data preprocessing
The same preprocessing procedures were used for both scanning datasets. Most of the steps were carried out using statistical parametric mapping (SPM2, http://www.fil.ion.ucl.ac.uk/spm/). Because of the instability of the initial signal and the subjects' adaptation to the situation, the first 10 images were discarded. The remaining images were first corrected for within-scan acquisition time differences between slices and then realigned to the first volume to correct for inter-scan head motions. Next, we spatially normalized the realigned images to the standard EPI template and re-sampled them to a voxel size of 3 × 3 × 3 mm³. Subsequently, we used a multiple regression procedure to remove other possible sources of artifacts [28]: (1) the six motion parameters, (2) a linear drift, and (3) the drift of whole-brain signals averaged over the entire brain.
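The nuisance-regression step listed above (motion parameters, linear drift and the averaged whole-brain signal) can be sketched as an ordinary least-squares fit whose residuals are kept. Array names and shapes below are illustrative, not those of the actual SPM-based pipeline.

import numpy as np

def remove_nuisance(bold, motion_params):
    # bold: (n_voxels, n_timepoints); motion_params: (n_timepoints, 6).
    n_t = bold.shape[1]
    drift = np.linspace(-1.0, 1.0, n_t)                      # linear drift regressor
    global_signal = bold.mean(axis=0)                        # whole-brain averaged signal
    design = np.column_stack([np.ones(n_t), drift, global_signal, motion_params])
    beta, *_ = np.linalg.lstsq(design, bold.T, rcond=None)   # fit the nuisance model per voxel
    return (bold.T - design @ beta).T                        # keep the residuals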
Regional homogeneity (ReHo) Analysis
For a given voxel, the ReHo approach calculates Kendall's coefficient of concordance (KCC) [52] of the time series of this voxel with those of its nearest neighbors: where W ranges from 0 to 1; R_i = Σ_{j=1}^{k} r_{ij}, where r_{ij} is the rank of the i-th time point in the j-th voxel; R̄ = (n + 1)k/2 is the mean of the R_i; n is the number of time points in each voxel time series (here n = 170); and k is the number of time series within the measured cluster (here k = 27, the central voxel plus its 26 neighbors). An individual W map (i.e., ReHo map) was obtained on a voxel-by-voxel basis using the data from each subject. In the present study, we first calculated the ReHo-reflected activity map on a voxel-by-voxel basis for each subject, in both the N-Rest and the M-Reprocess. Then we standardized each ReHo map by dividing it by its average value over the entire brain. This procedure is similar to that used in PET studies [23] and provides standard ReHo maps with a mean value of 1.
Functional connectivity Analysis
Recently, fMRI studies have demonstrated that spontaneous BOLD fluctuations are coherent within specific neuro-anatomical systems [25,28,34,[53][54][55][56][57][58][59]]. Based on the conventional functional connectivity methodology [34], we calculated the functional connectivity among the STPs-network regions obtained above, in both the N-Rest and the M-Reprocess, to evaluate their interactions with each other as follows: (1) the reference time series of each region was calculated by averaging the time series across all the voxels in that region; (2) Pearson's correlation coefficients were calculated between each pair of reference time series; (3) Fisher's r-to-z transformation was applied to improve the normality of these correlation coefficients. To investigate whether offline memory reprocessing could modulate the interactions within the STPs-network, these z values were then entered into a random-effect paired t-test to determine the functional connectivities that were significantly different between the N-Rest and the M-Reprocess.
Figure S1. Brain regions whose ReHo-reflected activity was significantly correlated with subjects' daydreaming/mind-wandering frequencies (total score = 60) during the N-Rest (P < 0.05, t > 1.82 for individual voxels; cluster size > 5 voxels). It can be seen that the core regions associated with the brain's default network, including the ventral and dorsal medial prefrontal gyrus (mPFG), the posterior cingulate cortex (PCC), the precuneus (PCu), the inferior parietal lobule (IPL), the angular gyrus (AG), the superior occipital gyrus (SOG), the lateral temporal cortex (LTC), and the hippocampus/parahippocampus (HIP/PHIP), showed significant correlations with the subjects' spontaneous thought processes.
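A minimal sketch of the ReHo computation is given below. It uses the standard KCC formula W = (Σ_i R_i² − n·R̄²) / (k²(n³ − n)/12), which is consistent with the definitions of R_i and R̄ above, and it assumes no tied ranks; function names, loops and the brain-mask handling are illustrative rather than the paper's actual implementation.

import numpy as np

def kendalls_w(ts):
    # ts: (k, n) array of k time series with n time points each.
    k, n = ts.shape
    ranks = ts.argsort(axis=1).argsort(axis=1) + 1     # rank each time point within each series
    r_i = ranks.sum(axis=0)                            # R_i: sum of ranks at each time point
    r_bar = (n + 1) * k / 2.0                          # mean of the R_i
    return ((r_i**2).sum() - n * r_bar**2) / (k**2 * (n**3 - n) / 12.0)

def reho_map(bold):
    # bold: 4-D (x, y, z, t) array; W computed over 3x3x3 neighborhoods (k = 27), edges left at 0.
    out = np.zeros(bold.shape[:3])
    for x in range(1, bold.shape[0] - 1):
        for y in range(1, bold.shape[1] - 1):
            for z in range(1, bold.shape[2] - 1):
                block = bold[x-1:x+2, y-1:y+2, z-1:z+2, :].reshape(27, -1)
                out[x, y, z] = kendalls_w(block)
    return out / out.mean()   # standardize by the mean value (in practice over in-brain voxels only)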
5,572
2009-03-17T00:00:00.000
[ "Psychology", "Biology" ]
A Versatile, Linear Complexity Algorithm for Flow Routing in Topographies with Depressions. We present a new algorithm for solving the common problem of flow trapped in closed depressions within digital elevation models, as encountered in many applications relying on flow routing. Unlike other approaches (e.g., the so-called "Priority-Flood" depression filling algorithm), this solution is based on the explicit computation of the flow paths both within and across the depressions through the construction of a graph connecting together all adjacent drainage basins. Although this represents many operations, a linear time complexity can be reached for the whole computation, making it very efficient. Compared to the most optimized solutions proposed so far, we show that this algorithm of flow path enforcement yields the best performance when used in landscape evolution models. Besides its efficiency, our proposed method also has the advantage of letting the user choose among different strategies of flow path enforcement within the depressions (i.e., filling vs. carving). Furthermore, the computed graph of basins is a generic structure that has the potential to be reused for solving other problems as well, such as the simulation of erosion. This sequential algorithm may be helpful for those who need to, e.g., process digital elevation models of moderate size on single computers or run batches of simulations as part of an inference study.
Introduction
Finding flow paths on a topographic surface represented as a Digital Elevation Model (DEM) is a very common task that is required by many applications in domains such as hydrology, geomorphometry, soil erosion and landscape evolution modeling, and for which various algorithms have been proposed either for gridded DEMs (e.g., O'Callaghan and Mark, 1984; Jenson and Domingue, 1988; Quinn et al., 1991; Tarboton, 1997) or unstructured meshes (e.g., Jones et al., 1990; Banninger, 2007). Closed depressions may arise in DEMs because they are real topographic features or result from interpolation error during DEM generation or its lack of resolution. These spurious local minima need to be resolved because they disrupt flow routing, produce hydrologically unrealistic results or introduce artificial singularities that may result from a sudden, unrealistic jump is usually considered as more realistic, especially under temperate or humid climates. Although not having a linear time complexity, the most recent algorithms of depression removal, e.g., the "Priority-Flood" algorithm and its variants (Barnes et al., 2014a; Zhou et al., 2016; Wei et al., 2018), have been optimized so that they can be used efficiently on large datasets. To increase performance for very large datasets, further optimization efforts have focused primarily on rather complex, parallel variants of these algorithms (Barnes, 2016; Zhou et al., 2017).
Yet, in some applications flow path enforcement still remains the main bottleneck. This is for example the case in many Landscape Evolution Models (LEMs) that simulate an evolving topography (see Tucker and Hancock, 2010, for a review) and that rely on flow routing to compute erosion rates. To produce realistic results, flow path enforcement is often applied many times, i.e., at each simulation time step (Figure 1), even when this eventually becomes irrelevant, as the modelled erosional processes usually tend to remove depressions rather than deepen or add new ones (Braun and Willett, 2013). Furthermore, LEMs are also used as forward models in sensitivity analyses and/or inferences on the parameters that control erosional processes, which often require running a large number of models to adequately explore the parameter space. Parallel flow routing and hydrological correction algorithms don't help much here, as grid-search and/or sampling methods (e.g., Sambridge, 1999) are generally easier to implement and more effective to execute in parallel. Highly optimized, sequential algorithms are still needed in this case. We have developed a new method of flow enforcement that is based on the explicit building of a graph of drainage basins (possibly encompassing depressions) and the computation of the flow paths both within and across those basins. This idea was first introduced in a Computer Graphics implementation of the Stream Power Law (Cordonnier et al., 2016), but with a sub-optimal complexity. Although this approach may appear naive at first glance, we have improved it by using fast algorithms of linear complexity at each step of the procedure, which now makes the whole computation very efficient. Not only does this method enable the use of a wide range of techniques of flow enforcement within the closed depressions (e.g., depression filling, channel carving or more advanced techniques), but it also provides generic data structures that could potentially be reused for solving other problems, like modeling the behavior of erosion/deposition processes within those depressions. After a detailed presentation of the different steps of the method, we will show in the sections below, through some results, how our algorithm behaves and performs compared to existing solutions of flow path enforcement. We will finally discuss the assets and limitations of our method, with some focus on landscape evolution modeling applications.
Algorithm
The input of the algorithm is a topography T = (N, E), where N is a set of nodes and E is a set of edges that link pairs of neighbor nodes. A node n is given a horizontal position p n and a vertical elevation z n . A topography may for example result from a triangulation or correspond to a regular grid of 4-connectivity (i.e., four neighbors per node) or 8-connectivity (i.e., also including diagonal neighbors). We follow the conventions of Braun and Willett (2013) to define flow paths on the topography: each node n is given (1) a single flow receiver, rcv(n), which corresponds to the one of its (strictly) downslope neighbors having the steepest slope, and (2) a set of flow donors, Donors(n), which is a subset of the neighbors of n and is defined as Donors(n) = {k ∈ Nb(n) s.t. rcv(k) = n}. We set rcv(n) = ∅ when n is a singular node: it either corresponds to a user-defined boundary node (e.g., a node on the domain boundary) or a local minimum in the topography, i.e., a node inside the domain where all of its neighbors have a higher elevation, and which corresponds to either a pit or a flat-bottomed depression in the terminology of Lindsay (2016). We propose an algorithm that updates the receivers of a subset of N such that the flow is never trapped in local minima. This algorithm primarily aims at resolving local minima in the context of flow routing and thus leaves the elevation of the nodes unchanged. Hence it breaks the previously introduced definition of a flow receiver: the new receivers assigned by the algorithm generally produce some localized "upslope flow". While this seems unnatural and may not be wanted, the data structures used by the algorithm provide enough information to efficiently address this issue later depending on the application, which is beyond the scope of this work. Still, the algorithm ensures that the updated flow routing stays consistent across the whole topography by respecting the following properties for each node n of the topography:
1. There exists a single boundary node b (not a local minimum), and a unique flow path from n to b. The flow path is defined as the set of nodes P = (n, rcv(n), rcv(rcv(n)), ..., b).
2. This flow path does not contain any cycle, i.e., each node appears only once in P.
3. The receivers defining P are chosen such that P satisfies properties 1 and 2 and minimizes the energy E, defined as the sum of the node energies along the path, E(P) = Σ_{i ∈ P} E_i.
As a first approximation, we set E_i = z_i (the altitude of the node). We will discuss later the special case of nodes under water level.
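As an illustration of the receiver convention just described, here is a minimal sketch that computes single-flow-direction receivers on a regular grid with 8-connectivity. The function name and the flat indexing scheme are illustrative choices, and boundary handling is reduced to "no strictly downslope neighbor means singular node"; this is not the paper's implementation.

import numpy as np

def compute_receivers(z, dx=1.0):
    # z: 2-D array of elevations; returns flat receiver indices (-1 for singular nodes).
    ny, nx = z.shape
    rcv = -np.ones(ny * nx, dtype=np.int64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for i in range(ny):
        for j in range(nx):
            best_slope = 0.0
            for di, dj in offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    dist = dx * np.hypot(di, dj)
                    slope = (z[i, j] - z[ni, nj]) / dist
                    if slope > best_slope:          # strictly downslope and steepest
                        best_slope = slope
                        rcv[i * nx + j] = ni * nx + nj
    return rcv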
Our method is essentially based on the computation of a graph connecting adjacent drainage basins. We define a basin as the set of all nodes that flow toward the same singular node (Figure 2 (b)). A basin is either a boundary basin or an inner basin depending on whether the singular node is a boundary node or a local minimum. To better explain the problem that we want to solve, we consider a filled topography as the result of an ideal physical process where a perfectly fluid material has been poured onto an impermeable ground and stabilized at steady state. For a node n, we define as water level (w n ) the elevation of the fluid surface, and as a spill any node s such that ∃d ∈ Donors(s) | w d = w s and z s > w rcv(s) . Note that for a flow routing observing the aforementioned properties, the water level can be computed as w n = max(w rcv(n) , z n ). We also use the term depression from the terminology of Lindsay (2016), and we define it with respect to a basin B as a subset of nodes of B under water level, characterized by w n = w rcv(n) . Note that the water level of a boundary basin corresponds to the elevation of its associated boundary node, so that it contains no depression. In the case of nested depressions, the water level of a basin may be higher than the elevations of all its nodes, which means that the spill does not always belong to B. The energy of the nodes should be changed to E_i = w_i , but as described later, one may choose various routing strategies inside the depressions depending on the application. Therefore, we allow any path within depressions by setting E_i to zero inside them, and keeping E_i = z_i elsewhere. One may break the problem of flow path enforcement down to three smaller problems: find the spill of each depression, force the flow within the depressions to be routed toward their respective spill, and ensure that the flow through the spills is properly routed into adjacent basins. The proposed algorithm addresses this problem in an explicit manner and can be divided into three main stages:
1. Compute the basins and link all pairs of adjacent basins (Figure 2 (b)).
2. Select only some of the basin links computed at the previous stage and orient them such that the flow is routed consistently across adjacent basins, from inner basins toward the boundary basins (Figure 2 (c)). This operation is not trivial: an optimal selection needs a global knowledge of the whole basin graph. To do so, we use an algorithmic structure: a minimum spanning tree of the basin graph. We propose here two algorithms, a simple one with O(n log n) complexity, and a more complex one with O(n) complexity.
3. Update the flow receivers. Using the links selected at the previous stage, we update (only some of) the receivers to enforce the flow both within and across inner basins so that it is ensured to finally reach the boundary basins and their associated boundary nodes. We propose three different methods (one may choose a method over another depending on the specific problem to solve).
Each of these stages processes the whole DEM and, as such, is run only once for a given topography. They are each detailed in the next sections.
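The water-level relation w_n = max(w_rcv(n), z_n) quoted above translates directly into a short routine, assuming the receivers already satisfy the consistency properties of the previous section and that the nodes are visited in an order where every receiver comes before its donors (the "stack" order of Braun and Willett, 2013). Names and data layout are illustrative.

import numpy as np

def water_levels(z_flat, rcv, order):
    # z_flat: flat node elevations; rcv: receiver indices (-1 for singular nodes);
    # order: node indices sorted so that each receiver appears before its donors.
    w = np.array(z_flat, dtype=float)
    for n in order:
        r = rcv[n]
        if r >= 0:                       # singular nodes keep w_n = z_n
            w[n] = max(w[r], z_flat[n])
    return w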
Basin computation and linkage
This first stage consists in first assigning a basin identifier, basin_id(n), to each node n of the topography. The identifiers are added sequentially by starting at singular nodes and parsing the nodes using a depth-first traversal in the direction of the donors (see appendix A1). The case of flat-bottomed depressions does not require any particular treatment: all nodes within flat areas are singular nodes and therefore are each assigned a unique basin identifier. Then, the links connecting all pairs of adjacent basins are retrieved. To each link also corresponds an edge of the topography, here called a pass, which represents the crossing of lowest elevation between the two basins. For example, the link L = (B 1 , B 2 ) connects the basins B 1 and B 2 and has the corresponding Pass(L) = (n 1 , n 2 ), where n 1 ∈ B 1 and n 2 ∈ B 2 and where the chosen (n 1 , n 2 ) minimizes z pass(L) = max(z n1 , z n2 ). We define a single procedure to retrieve both the links and their passes (see appendix A2). This procedure parses each edge of the topography: if the two nodes of the current edge have different basin identifiers, then (1) it adds a new link if no link has already been set for these two basins, and (2) it sets or updates the pass of that link with the current edge. The set of basins B and the set of retrieved links L both define a basin graph. It is worth noting that, at this stage, the links/passes are not oriented and that only one link/pass is stored for two adjacent basins. The procedure described above runs sequentially and won't add the link (B 2 , B 1 ) if it already added the link (B 1 , B 2 ).
Flow routing across adjacent basins
This second stage tackles the problem of selecting the right subset of links so that we obtain consistent flow paths on the basin graph. To illustrate the proposed solution, let's start from an inner basin. If it is filled with water, the water level will rise until it finds a pass where water eventually pours into another, adjacent basin. The associated link is then called the outflow of the basin. Hence, routing the flow across the basins consists in connecting all outflows such that the resulting flow paths, from inner basins to the boundary basins, have the same properties as stated above, i.e., those paths are unique, contain no cycle and minimize the energy needed to reach the boundary basins. If we add to the basin graph a virtual basin (let's call it the external basin) to which we link all the boundary basins (i.e., the external basin may be viewed as a bucket collecting all the flow that leaves the domain), then we can represent the connected outflows using a specific algorithmic structure: a tree. More specifically, a basin tree is a tree that satisfies the properties above: it actually corresponds to a minimum spanning tree of the basin graph, i.e., a subset of the basin graph resulting from a selection of the links so that the following energy is minimized: E_tree = Σ_{L ∈ O} z pass(L) , where O is the set of selected links (or the set of outflows) and z pass(L) is the elevation of their respective passes. We propose two algorithms for the computation of a minimum spanning tree. Kruskal's algorithm is very generic and simple, with a log-linear complexity. We also propose a second algorithm, which leverages the planar nature of the basin graph to reach a linear complexity.
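A compact sketch of this first stage is given below: basins are labeled by a depth-first traversal from each singular node toward its donors, and a single lowest pass is kept per pair of adjacent basins. The data layout (flat node indices, a donors list per node, an explicit edge list) is an illustrative choice rather than the paper's actual implementation.

import numpy as np

def label_basins(rcv, donors):
    # rcv: receiver indices (-1 for singular nodes); donors: list of donor lists per node.
    n = len(rcv)
    basin_id = -np.ones(n, dtype=np.int64)
    next_id = 0
    for node in range(n):
        if rcv[node] == -1:                       # singular node: start a new basin
            stack = [node]
            while stack:                          # depth-first traversal toward the donors
                cur = stack.pop()
                basin_id[cur] = next_id
                stack.extend(donors[cur])
            next_id += 1
    return basin_id, next_id

def collect_links(edges, basin_id, z):
    # Returns {(b1, b2): (pass_elevation, (n1, n2))} with b1 < b2.
    links = {}
    for n1, n2 in edges:
        b1, b2 = basin_id[n1], basin_id[n2]
        if b1 == b2:
            continue
        key = (min(b1, b2), max(b1, b2))
        z_pass = max(z[n1], z[n2])                # pass elevation of this candidate edge
        if key not in links or z_pass < links[key][0]:
            links[key] = (z_pass, (n1, n2))       # keep only the lowest pass per basin pair
    return links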
Kruskal's algorithm
Kruskal's algorithm (Kruskal, 1956) is one of the most classical algorithms used for computing minimum spanning trees and is known to have a O(m log m) complexity, where m is the number of links. The number of links being always bounded by a linear function of the number n of nodes in the grid (Euler formula), using this algorithm induces a global upper bound of O(n log n) on the complexity of our solution. This algorithm uses a Union-Find structure to store and merge equivalence classes of objects (see Algorithm 1). The idea here is to parse all links L ∈ L sorted by increasing elevation z pass(L) , progressively grouping each pair of basins as a larger, virtual one (equivalence class). All subsequent paths between basins within this equivalence class are discarded to prevent loops. The Union-Find data structure has three operations:
MakeSet: create an equivalence class containing a single element.
Union: merge two equivalence classes.
Find: get the equivalence class of an object.
The optimal implementation of the Union-Find structure provides a O(α(N)) complexity for these operations, where N is the number of elements in the structure (i.e., here the number of basins) and α is the inverse Ackermann function, whose growth is lower than O(log N). This however requires first sorting the links by increasing weight (i.e., by the elevation of their respective passes), which finally yields a O(m log m) complexity for the whole computation.
Algorithm 1 Kruskal's algorithm applied to the basin graph
  Sort all links L by increasing weight z pass(L)
  for each link L = (B1, B2) do
    if Find(B1) ≠ Find(B2) then
      Add (B1, B2) to the basin tree
      Union(B1, B2)
Planar graphs
The problem of finding the minimum spanning tree is known to have a O(N) complexity when the graph is planar (Mareš, 2002). A planar graph is a graph which can be embedded in a plane such that none of its edges cross another one. The basin graph described in section 2.1 is an example of a planar graph. The key intuition behind the algorithm proposed in Mareš (2002) is that at least half of the vertices of a planar graph have at most 8 neighbors. The algorithm is then an adaptation of another classical algorithm, named Boruvka's algorithm (Boruvka, 1926); see Algorithm 2 for more details. The O(N) complexity comes from the fact that at each step of the outer loop, we parse and remove at least half of the nodes of the graph. As the number of grid nodes n > N, the complexity of this algorithm is bounded by O(n). As demonstrated by Mareš (2002), the limit of 8 neighbors for the selection of a basin in the inner loop is critical in halving the number of edges at each iteration of the outer loop and thus in obtaining a linear time complexity.
Algorithm 2 Planar Boruvka (Mareš, 2002)
  Initialize the basin tree structure
  while there remain nodes in the basin graph do
    while there is a basin B that has less than 8 neighbors do
      Add the link with the lowest neighboring pass to the basin tree
      Contract the link (if the link connects basins B and Bp, remove B and append all remaining neighbors of B to Bp)
    end while
    Clean the graph: bucket sort all links lexicographically to remove parallel edges.
  end while
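For completeness, here is a short sketch of the Kruskal/union-find selection described at the beginning of this subsection, applied to the links collected in the previous stage (the O(n) planar variant is more involved and is not sketched). The virtual external basin gathering all boundary basins is assumed to have been added beforehand; names are illustrative.

def minimum_spanning_basin_tree(links, n_basins):
    # links: {(b1, b2): (z_pass, pass_nodes)} as produced in the previous sketch.
    parent = list(range(n_basins))

    def find(b):                       # find with path halving
        while parent[b] != b:
            parent[b] = parent[parent[b]]
            b = parent[b]
        return b

    def union(b1, b2):
        parent[find(b1)] = find(b2)

    tree = []
    for (b1, b2), (z_pass, pass_nodes) in sorted(links.items(), key=lambda kv: kv[1][0]):
        if find(b1) != find(b2):       # discard links that would create a cycle
            tree.append((b1, b2, pass_nodes))
            union(b1, b2)
    return tree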
A special case may arise when the basin graph is computed from a grid of 8-connectivity. In this case, the edges of the graph may cross each other due to the diagonal connectivity, possibly making the basin graph not perfectly planar. This is, however, rather unlikely, as it implies that two passes connecting different basins are found on the two diagonals connecting four adjacent nodes of the grid. Furthermore, this issue does not impact the correctness of the algorithm; only the linear complexity is not formally proven. Because it is not planar, the case of an 8-connectivity grid would fall in the second category mentioned by Mareš (2002), that of graph classes closed under taking minors. We have validated this experimentally by randomly computing minors of different-sized 8-connected graphs. We found an edge density of 4, implying that half of the basins in the basin graph are linked to at most 16 adjacent basins (and not 8 as for planar graphs) at any step of the algorithm. Therefore, we have demonstrated the linear complexity for 8-connected graphs experimentally, although future work is needed to prove this in a formal framework.
Updating flow receivers
The basin tree obtained at the previous stage must be oriented before routing the flow from inner basins to the boundary basins. This is achieved by traversing the tree in the reverse order (i.e., starting from the boundary basins) and labelling the two nodes of each pass, one as n in (incoming flow) and the other one as n out (outgoing flow). Depending on their elevation, either n in or n out is the spill node of the corresponding basin. The last stage then consists in updating the flow receivers so that any flow entering an inner basin is ensured to leave the basin through n out . The most straightforward solution would be to only update the receiver of each local minimum p so that rcv(p) = n out . Note that if n in has a higher elevation than n out , then two receivers must be updated: rcv(n in ) = n out and rcv(p) = n in . This very simple solution ensures topological continuity of the flow but does not preserve its spatial continuity. We therefore propose two other, more realistic methods: one similar to depression filling and another similar to depression carving. Note that we use carving and filling as metaphors, as our algorithm only changes the flow graph connectivity without altering elevation values. For each of the variants, the donors and stack structures need to be updated to reflect the changes in the receivers.
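As an illustration of the carving-like variant, the sketch below reverses the chain of receivers that runs from the inner basin's side of the pass down to its local minimum, so that the minimum now drains toward the pass. This is only one plausible reading of the strategy, not the paper's implementation; the filling-like variant would instead reassign receivers over the whole flooded area.

def carve_basin(rcv, n_in, n_out):
    # rcv: receiver array; n_in: pass node inside the inner basin; n_out: pass node in the adjacent basin.
    path = [n_in]
    while rcv[path[-1]] != -1:          # walk down to the local minimum (rcv == -1)
        path.append(rcv[path[-1]])
    rcv[n_in] = n_out                   # route the pass node into the adjacent basin
    for upper, lower in zip(path[:-1], path[1:]):
        rcv[lower] = upper              # reverse the chain: the minimum now drains toward the pass
    return rcv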
3.2 Effect of flow path enforcement strategies on eroded topographies
Figure 1 already shows the effect of flow path enforcement vs. no enforcement on the evolution of an escarpment under active erosion processes. A second set of experiments, shown in Figure 4, illustrates the impact that the different strategies of flow receiver updating have on the evolution of the topographic surface under the action of channel erosion. The input synthetic topography is defined on a 100 × 100 regular grid and looks like an inverted pyramid with 45° regular slopes, forming a single, big depression (Figure 4 (a)). The node at the middle of the top boundary is the only node that is not part of the depression: it has the same elevation as the node at the center of the grid and it is defined as a boundary node. A single time step of 5000 years of erosion only (no uplift) is performed using each of the strategies described in section 2.3: Simple correction. In this specific case, the algorithm updates the receivers of only three nodes: (1) one of the neighbors of the boundary node, which here corresponds to the spill of the closed depression, (2) one of the neighbors of the spill that, together with the spill, forms the pass connecting the depression to the boundary node and (3) the local minimum at the bottom of the depression. The new assigned receivers are respectively (1) the boundary node itself, (2) the spill and (3) the other node of the pass. We can see in Figure 4 (b) that this strategy doesn't allow channel erosion to propagate much from the boundary node into the closed depression. In fact, drainage area values close to the boundary node are high enough to trigger erosion, but the low values of drainage area in the vicinity (within the depression) prevent further propagation of the erosion wave. Depression carving. Unlike the former strategy and as expected, Figure 4 (c) shows that the depression carving strategy allows erosion to propagate toward the local minimum along a narrow and deep trench. Depression filling. Using the depression filling strategy, flow receivers are updated over a large area of the depression as if the water surface was replaced by a very gentle slope toward the spill. As a result, erosion affects a great part of the modelled domain, with the emergence of a star-like pattern centered at the spill (Figure 4 (d)). The number and disposition of the branches of the star are due to the grid 8-connectivity used here. Choosing one strategy over another greatly depends on the specific application. For example, the simple correction strategy may be acceptable if one assumes that no erosion could happen in depressions below the water level. However, interrupted drainage area patterns within the depressions may be problematic when used with erosion algorithms like the FastScape model, which uses an implicit time scheme for solving the Stream Power Law but still treats drainage area explicitly, resulting in a too slow opening of the closed depressions by erosion. The depression carving or depression filling strategies generally yield better results in the latter case. These two strategies have, however, contrasted behaviors, and choosing one or the other will depend on several criteria such as the size (i.e., depth vs. volume) of the depressions.
Performances
To assess the performance of our algorithm, we have run multiple benchmarks under various settings. Although these benchmarks mostly take place in the framework of landscape evolution modeling, they provide results that may be useful in other applications too. Note that for better readability, we present here only the results from benchmarks applied on a fixed grid of 16384 × 16384 nodes. We obtain consistent results for other grid sizes. We have run benchmarks for our algorithm - including the two variants for computing the minimum spanning tree but considering only the depression filling strategy - as well as for three other, state-of-the-art algorithms of local minima resolution, respectively proposed by Barnes et al. (2014a), Zhou et al. (2016) and Wei et al. (2018). All three of those algorithms fill the depressions using improved variants of the "Priority-Flood" algorithm that reduce the number of nodes processed by a priority queue.
The Barnes et al. (2014a) variant used here, i.e., "Priority-Flood+", is only slightly optimized but has the advantage. The number of local depressions - and thus the size of the graph - is small enough with respect to the size of the grid, resulting in only a small memory overhead compared to the Priority-Flood variants.
Conclusions
We have presented here a new algorithm for flow path enforcement in topographies with depressions. We have designed this algorithm within the framework of landscape evolution modeling and we have demonstrated through benchmarks that, in this scope, it may greatly improve performance compared to other, state-of-the-art solutions. The potential of this algorithm is, however, not limited to landscape evolution models. On a broader scope, the basin graph and its minimum spanning tree are generic structures that other applications may leverage, possibly through derived quantities such as the water level of each depression. We propose here optimal methods to compute those structures and quantities. Despite the fact that our algorithm is rather complex and requires some work to be properly implemented, it is designed in a composable way such that it is easy to reuse one or several of its components. Adding new features like alternative strategies of flow path enforcement within the depressions would require only little effort, too. While being versatile, this new algorithm does not provide a universal solution to the problem of flow routing both within and across closed depressions. Perhaps its main limitation is the assumption of single direction flow, i.e., each node has one unique flow receiver. Adding full support for multiple direction flow (MDF) without losing performance is rather difficult and would require a fair amount of re-design work at each of the three stages of the algorithm:
- Basin computation should take into account divergent flow (basin labels are not unique for grid nodes located on drainage divides).
- It should be theoretically possible to route the outflow from an inner basin into more than one of its adjacent basins (this is currently not possible using a minimum spanning tree computed from the basin graph).
- Alternative, MDF-compliant methods should be implemented to update the flow receivers within the depressions.
Other algorithms like the Priority-Flood don't have that limitation: they act directly on elevation values and don't prevent us from applying MDF flow routing methods on the modified topography. Another limitation of this algorithm is its sequential implementation. Further work is needed to adapt it so that it could be run on modern, multi-core and/or GPU-based architectures. Still, many use cases would benefit from the current implementation. These include processing datasets of moderate size on a single computer or running batches of simulations or analysis pipelines, e.g., in the context of sensitivity analyses or inferences on model parameters. library that has been extracted for reproducibility purposes. Further maintenance and new developments will happen in fastscapelib's main repository: https://github.com/fastscape-lem/fastscapelib.
Figure 1. Simulation of the evolution of an escarpment over 70 time steps of 1,000 years each and on a 1024 × 256 regular grid, using the FastScape model (Braun and Willett, 2013, see also section 3 below). The grid nodes of the leftmost column (boundary nodes) have a fixed elevation while the initial elevation of the other nodes corresponds to a 500 m high flat surface with small random perturbations. Using the same set of model parameters, simulation results are shown (a) without and (b) with correction of flow routing in local depressions at each time step. As illustrated, flow path disruptions in (a) cause a much slower migration of the escarpment, while the topography predicted in (b)
Figure 2. Illustration of the inputs and the first steps of the proposed flow routing algorithm. (a) The input topography is defined on top of a mesh by a set of nodes and edges. A single edge is selected for each node; it connects the node to its flow receiver, i.e., its neighbor with the steepest slope. Nodes with no receiver are local minima (colored in the figure). (b) All the nodes that flow to the same local minimum belong to the same basin. A graph of basins is created by connecting together adjacent basins with links, which are materialized on the mesh by edges representing the passes, i.e., the crossings of lowest elevation that connect each pair of basins (black thick arrows). (c) Some of the links are selected by computing a minimum spanning tree and the corresponding passes are oriented in the direction of the flow across the basins (unidirectional arrows). This structure is then used to update the flow receivers so that the flow reaches the domain boundaries without being interrupted.
Kruskal's algorithm is one of the most classical algorithms used for computing minimum spanning trees and is known to have a O(m log m) complexity, where m is the number of links. The number of links being always bounded by a linear function of the number n of nodes in the grid (Euler formula), using this algorithm induces a global upper bound of O(n log n):
    Sort all links L by increasing weight z_pass(L)
    for each link L = (B1, B2) do
        if Find(B1) ≠ Find(B2) then
            Add (B1, B2) to the basin tree
            Union(B1, B2)
Figure 3. Our algorithm of flow enforcement run on a synthetic case. (a) Hillshade and contour plot of the input topography with apparent depressions. (b) Basins (areas of unique, random colors) and all passes connecting adjacent basins (thin black lines). (c) Flow directions across the basins (white arrows), as resulting from the computation of a minimum spanning tree from the basin graph, and water level (blue areas) after some erosion is applied on the input topography.
3.2 Effect of flow path enforcement strategies on eroded topographies
Figure 4. Demonstration of the effect of flow path enforcement on erosion, using different strategies of flow receiver "correction" within inner basins. (a) Hillshade and contour plot of the initial topography. (b), (c) and (d) Hillshade and contour plot of the topography obtained after running a single time step of 5,000 years with channel erosion only (no uplift), with flow receivers updated using each of the different strategies described in section 2.3. Water level is shown in blue, and is computed by propagating the spill elevation while parsing the nodes in the upstream order (based on updated donors).
Figure 4 (b) shows that the simple correction strategy does not allow channel erosion to propagate much from the boundary node into the closed depression. In fact, drainage area values close to the boundary node are high enough to trigger erosion, but the low drainage area values in the vicinity (within the depression) prevent further propagation of the erosion wave. Depression carving. Unlike the former strategy and as expected, Figure 4 (c) shows that the depression carving strategy allows erosion to propagate toward the local minimum along a narrow and deep trench. Depression filling. Using the depression filling strategy, flow receivers are updated over a large area of the depression as if the water surface were replaced by a very gentle slope toward the spill. As a result, erosion affects a great part of the modelled domain, with the emergence of a star-like pattern centered at the spill (Figure 4 (d)). The number and disposition of the branches of the star are due to the grid 8-connectivity used here.
Figure 5. Results from benchmarks assessing the performance of our algorithm for local minima resolution - including both the O(n log n) Kruskal and O(n) Mareš variants for computing the minimum spanning tree - compared to three other solutions based on variants of the Priority-Flood depression filling algorithm proposed by Barnes et al. (2014a), Zhou et al. (2016) and Wei et al. (2018), respectively. See text for more details about the setup of these benchmarks. (a) Execution time measured for local minima resolution applied once on a synthetic input topography vs. total number of local minima generated in the input topography. (b) Evolution of the number of local minima detected in the topography obtained at each of the first 20 time steps of a FastScape model run. Each curve corresponds to a given magnitude of random perturbations added to produce spatially variable uplift rates (magnitude values are relative to a fixed uplift rate of 5 × 10⁻³ m y⁻¹).
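For reference, the minimum spanning tree step outlined in the pseudocode above (sort the basin links by pass elevation, then merge components with union-find) can be sketched as follows. The basin indices, pass elevations, and function names below are illustrative only and are not taken from the fastscapelib code base.

    # Minimal sketch of the Kruskal + union-find step described above: links between
    # adjacent basins are sorted by pass elevation and merged into a tree of basins.
    def find(parent, b):
        # Path-compressing lookup of the representative basin.
        while parent[b] != b:
            parent[b] = parent[parent[b]]
            b = parent[b]
        return b

    def basin_spanning_tree(n_basins, links):
        """links: iterable of (z_pass, basin1, basin2); returns the selected links."""
        parent = list(range(n_basins))
        tree = []
        for z_pass, b1, b2 in sorted(links):      # sort by increasing pass elevation
            r1, r2 = find(parent, b1), find(parent, b2)
            if r1 != r2:                          # link connects two separate components
                tree.append((b1, b2, z_pass))
                parent[r1] = r2                   # union of the two components
        return tree

    # Toy basin graph: 4 basins, weights are pass elevations.
    links = [(12.0, 0, 1), (9.5, 1, 2), (15.0, 0, 2), (11.0, 2, 3)]
    print(basin_spanning_tree(4, links))

The global sort in this sketch is what gives the O(m log m) bound discussed in the text; the O(n) Mareš variant mentioned in Figure 5 avoids that sort but is not sketched here.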
7,239.4
2018-12-10T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Impairment of Neuronal Glutamate Uptake and Modulation of the Glutamate Transporter GLT-1 Induced by Retinal Ischemia Excitotoxicity has been implicated in the retinal neuronal loss in several ocular pathologies including glaucoma. Dysfunction of Excitatory Amino Acid Transporters is often a key component of the cascade leading to excitotoxic cell death. In the retina, glutamate transport is mainly operated by the glial glutamate transporter GLAST and the neuronal transporter GLT-1. In this study we evaluated the expression of GLAST and GLT-1 in a rat model of acute glaucoma based on the transient increase of intraocular pressure (IOP) and characterized by high glutamate levels during the reperfusion that follows the ischemic event associated with raised IOP. No changes were reported in GLAST expression while, at neuronal level, a reduction of glutamate uptake and of transporter reversal-mediated glutamate release was observed in isolated retinal synaptosomes. This was accompanied by modulation of GLT-1 expression leading to the reduction of the canonical 65 kDa form and upregulation of a GLT-1-related 38 kDa protein. These results support a role for neuronal transporters in glutamate accumulation observed in the retina following an ischemic event and suggest the presence of a GLT-1 neuronal new alternative splice variant, induced in response to the detrimental stimulus. Introduction L-glutamate is the major excitatory neurotransmitter in the Central Nervous System including the retina, where it is released by photoreceptors, bipolar and ganglion cells [1,2] and is responsible for the transmission of the light signal. The physiologic concentration of glutamate at the synaptic cleft is maintained by Na + -dependent, high-affinity transporters identified as Excitatory Amino Acid Transporters (EAATs), which are located on both neurons and glia [3]. In the retina, four out of the five known EAATs have been described: EAAT1 (also known as GLAST) expressed by Mller cells; EAAT2 (glutamate transporter-1; GLT-1) localized on photoreceptors and bipolar cells; EAAT3 (EAAC1) detected in horizontal, ganglion and some amacrine cells; EAAT5 is associated with photoreceptors and bipolar cells [4,5]. Besides its role as neurotransmitter, glutamate is also a potent neurotoxin [6,7], therefore the efficiency of glutamate transporters is crucial not only to terminate the excitatory signal, but also to prevent the excitotoxic neuronal damage [8][9][10]. Many experimental evidence suggest that excitotoxicity is one of the main factors involved in ganglion cell death observed during retinal hypoxic/ischemic events [11][12][13][14] which are common in several ocular pathologies including diabetic retinopathy, retinal and choroidal vessels occlusion and glaucoma [15][16][17]. This hypothesis is strongly supported by the neuroprotection afforded by intravitreal or systemic treatment with NMDA and non-NMDA receptor antagonists [11,13,18,19] or by the open channel blocker memantine [20,21] in acute and chronic models of retinal ganglion cells (RGCs) death. As for other neurodegenerative disorders characterized by excitotoxic events, dysfunction of glutamate transporters has been found as part of the cascade leading to retinal neuronal death under different experimental and clinical pathological conditions [22,23]. However, the role of EAATs in retinal injuries, and in particular under retinal ischemia/reperfusion, remains controversial [24][25][26]. 
Most of the available data are related to the ischemic phase of retinal injury, while less is known on the role of EAATs during the reperfusion phase, which is crucial for the damage propagation and therefore the extent of neuronal death. Furthermore, due to their relevance in glutamate clearance, several studies focused on glial glutamate transporters, while less information has been gained on the role of neuronal glutamate transporters. The aim of this study was to further explore the function of EAATs under ischemic retinal conditions, and to extend our knowledge on their role during the following reperfusion phase. To this end, we examined the expression of GLAST and GLT-1 in a model of acute retinal ischemia induced by transient increase of IOP and characterized by high glutamate levels during the reperfusion phase [27]. GLT-1 and GLAST modulation under retinal ischemia/reperfusion We have previously reported a significant increase of vitreal glutamate in the ischemic retina that peaks after 150 min of reperfusion [27]. To investigate whether or not this event was associated with a modulation of glutamate transporters, the distribution of the two most abundant EAATs in the retina, i.e. GLAST and GLT-1 [33][34][35], has been evaluated by immunofluorescence. In the control retina, GLAST immunoreactivity was diffused from the outer to the inner limiting membrane (Figure 1, CTL) and no changes in its expression were detected in the ischemic retina after 150 min of reperfusion (Figure 1, ISCH/REP). It is established that retinal GLAST expression is limited to astrocytes and Müller cells, whereas GLT-1 is found in neurons, mainly on photoreceptors and various types of bipolar cells [36,37]. In agreement with this distribution, here GLT-1 was expressed in bipolar cell bodies of the inner nuclear layer (INL) and in bipolar cell processes and photoreceptor terminals at the inner and outer plexiform layers (IPL, OPL) under control conditions (Figure 2, CTL). The pattern of GLT-1 expression was increased after ischemia followed by 150 min of reperfusion (Figure 2, ISCH/REP). Retinal ischemia decreases [3H]-D-Aspartate uptake In order to evaluate if the increased GLT-1 immunoreactivity was paralleled by a modulation of the neuronal transporters' activity, we isolated the retinal synaptic terminals (synaptosomes) and performed uptake experiments using the non-metabolizable glutamate analogue [3H]-D-Aspartate ([3H]-D-Asp). As shown in Figure 3A, the apparent Km value for [3H]-D-Asp uptake did not significantly change between ischemic (39.24 ± 4.49 µM) and contralateral synaptosomes (48.44 ± 6.12 µM). However, a significant difference could be detected for the Vmax value (6.04 ± 0.32 and 10.42 ± 0.40 nmol/min/mg protein in ischemic and contralateral retina, respectively; p < 0.05), leading to a 42% reduction of [3H]-D-Asp uptake in the ischemic synaptosomes. Ischemia/reperfusion reduces transporter-mediated [3H]-D-Aspartate release in retinal synaptosomes To further characterize the functional role of GLT-1 in the ischemic retina, we performed release experiments aimed at evaluating the [3H]-D-Asp release evoked by depolarization with KCl (15 mM). In this experimental condition, the tritium release evoked by KCl in the naïve retina was 2.58 ± 0.56% of the total tritium content; total transmitter release in the contralateral and ischemic eyes did not vary significantly compared to the naïve eye, being 2.85 ± 0.52% and 3.03 ± 0.32%, respectively. However, the mechanism underlying the K+-evoked [3H]-D-Asp release was different between control and ischemic synaptosomes.
In particular, while the Ca 2+dependent release of [ 3 H]-D-Asp accounted for 40% of the total release in all groups ( Figure 3B), the percentage of the transportermediated release differed between groups. In fact, exposure of synaptosomes from non-ischemic retinas to the glutamate transporters inhibitor DL-TBOA (100 mM) reduced the evoked-release of [ 3 H]-D-Asp by 60%, while this was reduced by only 26% in the ischemic synaptosomes. Ischemia/reperfusion reduced the levels of mature GLT-1 The lower Vmax value detected for the ischemic retinas in the uptake studies and the reduced transporter reversal-mediated release of [ 3 H]-D-Asp could be due to an altered activity and/or a reduced expression of the neuronal transporters at the synaptic buttons. To address this point we performed immunoblotting experiments with purified synaptosomes that revealed a 16% decrease of expression of the 65 kDa band, corresponding to the mature GLT-1, at the end of ischemia and a more pronounced and statistically significant reduction (52%) after 150 minutes of reperfusion ( Figure 4A). Interestingly, at this same time point, a significant reduction of GLT-1 expression, was observed also in total retinal extracts from ischemic eye when compared to the contralateral eye ( Figure 4B). These data mirror the decreased Vmax reported in the uptake experiments and the reduced transporter-mediated component in the release experiments but did not support the previously Retinal increase of GLT-1 is dependent on new protein synthesis To rule out the possibility that a technical artifact might account for the discrepancy between biochemical and immunohistochemical data we evaluated if the increased GLT-1 immunoreactivity under ischemic condition was sensitive to pharmacological modulation. Intravitreal administration of the protein synthesis inhibitor cycloheximide (CHX; 50 micrograms/eye), at the end of the ischemic period, prevented the increase of GLT-1 immunoreactivity observed at 150 min of reperfusion ( Figure 5). This finding confirms the genuine nature of the transporter upregulation detected by immunofluorescence and demonstrates that GLT-1 synthesis occurs during the reperfusion phase. GLT-1 expression increased under native conditions On the basis of the latter results, it can be hypothesized that the discrepancy alluded to above could rise from the GLT-1 protein state in the two experimental settings (native in immunofluorescence and denatured in western blotting) and consequently from the ability of the used antibody to recognize the same epitopes. Therefore, we analyzed the total retinal extracts by western blotting under native conditions. The result obtained was consistent with the immunofluorescence data showing a significant increase of GLT-1 immunoreactivity after 150 min of reperfusion when compared to non-ischemic eye ( Figure 6A). In order to resolve the native complex, this was separated in a second dimension under denaturing and reducing conditions. Surprisingly, in the complex we detected the presence of a protein with an immunoreactive molecular weight of approximately 38 kDa that was significantly upregulated in the ischemic retina, while only weak immunoreactivity was reported for the band at 65 kDa, given as the mature GLT-1 ( Figure 6B). Discussion Elevation of extracellular glutamate is a key factor in retinal neurodegeneration occurring in glaucoma and other retinal pathologies characterized by ischemic events [12,13,38,39]. 
The activity of sodium-dependent high affinity EAATs is the primary mechanism to maintain glutamate homeostasis and alterations of their expression and function have been found in several neurological disorders [22,23]. Here we studied the modulation of two out of five EAATs present in the retina, i.e. GLAST and GLT-1 during retinal ischemia/reperfusion, showing a decrease of neuronal glutamate uptake associated with a significant modula- tion of GLT-1 while no significant changes of GLAST expression were evident. GLAST, expressed by Mller cells [34,40], is the predominant glutamate transporter in the retina and it is primarily responsible for uptake under physiological conditions [5,41]. In our experimental system, we reported no changes in GLAST expression and distribution during the reperfusion phase. This result is in agreement with previous experimental observations that showed no modulation of GLAST after ischemia induced by optic nerve ligation [33], following IOP increase by laser photocoagulation of trabecular meshwork [42] or episcleral vein cauterization [43]. Previous work suggests that GLAST function is compromised during retinal ischemia but it is regained during reperfusion [33] even though the transporter saturates at lower glutamate concentrations compared to physiological conditions [44]. Altogether, these data suggest that alterations of glial transporters would, if any, only partially contribute to the glutamate increase that we have previously reported following an ischemic event induced by transient elevation of IOP [13,27]. At variance with the latter conclusion, our present functional data on glutamate uptake in retinal synaptosomes showed a significant reduction of glutamate transport in the nerve terminals under ischemia/reperfusion suggesting that the failure of neuronal transporters may be a key component in the accumulation of extracellular glutamate observed in vivo [27]. In contrast to other areas of the brain, GLT-1 (EAAT2) is expressed in the retina only by neurons and several studies have pointed out a role for this transporter subtype in the pathology of neurodegenerative diseases including glaucoma [42,43]. Moreover, the presence of GLT-1 on bipolar cells near their synapses with RGCs suggests that GLT-1 activity may be crucial in regulating glutamate concentration around RGCs (24), the cellular subtype known to degenerate during glaucoma [45]. Paralleling the above functional studies, we also observed a reduction of GLT-1 in total extracts and in synaptosomal fraction from retinas subjected to ischemia followed by reperfusion. Again, this may account for the altered glutamate clearance at the synaptic cleft during reperfusion and, therefore, for the excitotoxic retinal neuronal death even when GLAST expression is unchanged. This hypothesis is supported by the evidence that in rat treatment with antisense oligonucleotides against GLT-1 increases vitreal glutamate levels leading to ganglion cell death [10]. Likewise, retinal damage following ischemia is exacerbated in GLT-1 deficient mice, though the effect is milder when compared to GLAST knockdown [40]. The reduction of GLT-1 expression observed in our study is in agreement with data previously reported in other experimental models of glaucoma. Indeed, a decrease of GLT-1 was found after trabecular laser treatment [24] and in transgenic mice bearing spontaneous ocular hypertension [26]. 
Vice versa, GLT-1 was increased in photoreceptors and bipolar cells from eye subjected to episcleral vein cauterization [43]. GLT-1 down-regulation and consequent glutamate increase have also been reported following focal cerebral ischemia in the rat cortex [46] and global ischemia in astrocytes derived from hippocampus [47]. The differential regulation of this transporter reported under different stress conditions would suggest that GLT-1 regulation is strictly dependent on the neuronal area affected as well as on the type of detrimental stimulus applied. Although the immunoblotting and functional data reported here concord with a decrease of GLT-1 expression, immunohistochemistry experiments reported an opposite trend, clearly showing a protein synthesis-dependent over-expression of GLT-1 after ischemia/reperfusion. In the insulted retina, controversial outcomes have often been reported using different analytical methods to test the expression of glutamate transporters. For instance, Martin and colleagues described a reduction of GLT-1 by immunoblotting but no changes by immunohistochemistry [24]. These investigators ascribed the conflicting findings to limitations of the methodologies. However, the latter conclusion does not explain satisfactorily our conflicting data, due to a number of reasons. In fact, the GLT-1 localization we reported here is consistent with its cellular distribution as previously described in the retina [34,40]. Moreover, modulation of the increased GLT-1 immunoreactivity by CHX, an inhibitor of protein synthesis, supports the hypothesis of an authentic upregulation of GLT-1 at the translational level. The explanation for the discrepancy between the results obtained with the two technical approaches might lie in the biophysical state of the target protein (e.g. native in the immunohistochemistry and denatured in the immunoblotting) and, therefore, in the ability of the antibody to recognize it. Indeed, when the electrophoretic separation was performed under native conditions, we did observe an increase of density of the GLT-1 immunoreactive band, according to the results obtained in immunohistochemistry and at variance with immunoblots performed under denaturing conditions. Increase of GLT-1 expression has been shown following hypoxia [48] or over-activation of NMDA receptors [49], either conditions occur in our experimental model [50,51]. However, the increased GLT-1 immunoreactivity reported under native condition is not the consequence of an upregulation of the mature GLT-1 protein but, rather, of a GLT-1 related protein with an approximately 38 kDa molecular weight, as evidenced by bi-dimensional gel electrophoresis. The full length GLT-1 (often referred to as GLT-1a) has an approximate molecular weight, following glycosylation, of about 62 kDa [52]. Together with the original form, several posttranscriptionally regulated isoforms with different molecular weight have been described, including alternative splicing producing different N-and C-termini [53][54][55] and exon-skipping splice variants [56][57][58]. The expression of some of these variants is induced in response to selective cell injuries and alteration of the GLT-1 isoforms relative expression has been reported in some neurodegenerative diseases [59]. The functional variant referred to as GLT-1c is abundant in the retina and is expressed, under physiological conditions, only in photoreceptors [60]. 
However, its expression pattern changes in human and experimental glaucoma; in the latter, it is expressed also in RGCs [42]. Some isoforms appear to differ for their subcellular localization as well; for instance GLT-1b, unlike GLT-1 which is mainly detected in the membrane of astrocytes, is detected in neurons and astrocytes cytoplasm [61,62]. Our results showed that, following ischemia, GLT-1 immunoreactivity increases mainly in the perinuclear area of bipolar cells and along their processes. This suggests that, following the reduction of the original GLT-1 levels, there is a compensatory mechanism triggered by hypoxia, glutamate or by other factors inducing the expression of an alternative GLT-1 splicing form that accumulates mainly in neuronal soma. In conclusion, our data support a role for neuronal transporters in the glutamate accumulation observed in the retina following an ischemic event and suggest the presence of a GLT-1 neuronal new alternative splice variant, which is probably induced as a tentative compensatory mechanism in response to the detrimental stimulus. Likewise for other splice variants already described in the literature, we are unable to speculate on the function or the ability of this 38kDa isoform to generate functional transporters; therefore, further experiments will be needed in order to address these questions. Ethics statement Animal care and experimental procedures were carried out in accordance with the guidelines of the Italian Ministry of Health (DM 116/1992). The protocol (Protocol Number 110000351) was dealt with for the ethical and animal care aspects and approved by the Committee set by the Ministry of Health at the National Institute of Health (Rome). All surgical procedures were performed under deep anesthesia and all efforts were made to minimize suffering. Retinal ischemia injury Adult male Wistar rats (280-330 g) were purchased from Charles River (Lecco, Italy). Animals were housed under a 12 h light-dark cycle with ad libitum access to food and water. Retinal ischemia was induced by acutely increasing the IOP as previously described [18]. Animals were deeply anesthetized by intraperitoneal injection of chloral hydrate (400 mg/Kg) and laid on a heating pad to maintain the body temperature at 37uC. Topical anesthesia was induced by 0.4% oxibuprocain eye drops (Novesina, Novartis, Varese, Italy). A 27-gauge infusion needle, connected to a 500 ml bottle of sterile saline, was inserted in the anterior chamber of the right eye, and the saline container was elevated to produce an IOP of 120 mmHg for 50 min. Retinal ischemia was confirmed by whitening of the fundus. For each animal, the left eye was used as non-ischemic control. Body temperature was monitored before, during and after ischemia, and animals with value lower than 35.5uC were excluded. The animals were sacrificed by cervical dislocation at the end of the ischemia or at 150 min of reperfusion. Both eyes were immediately enucleated, retinas dissected and processed as described below. Uptake experiments Purified synaptosomes were resuspended in standard medium and suspension aliquots (500 mL) containing 6-9 micrograms of protein were incubated with [ 3 H]-D-Asp (3-10-30-100 mM) for 2 minutes at 37 uC. Each sample was washed three times and filtered through Whatman microporous membranes (GF/B) (Millipore, Billerica, MA, USA). 
Unspecific [ 3 H]-D-Asp uptake was obtained by performing the same procedure on parallel samples kept in a bath of water and ice while the glutamate transporters were blocked with DL-threo-beta-benzyloxyaspartic acid (DL-TBOA) (10 25 M). The radioactivity on filters was counted in an LKB 1214 Rackbeta liquid scintillation counter. For Release experiments Synaptosomes were incubated at 37uC for 15 min with 0.1 mmol/L [ 3 H]-D-Asp. After incubation, aliquots of the suspension (about 10 mg protein) were layered on microporous filters placed at the bottom of parallel superfusion chambers (Superfusion System, Ugo Basile, Comerio, Varese, Italy) maintained at 37uC and superfused with standard medium at a rate of 0.5 mL/min [30]. In order to equilibrate the system, fractions were superfused for 36 min and then collected as follows: two 3 min samples (t = 36-39 and 45-48 min; basal release) before and after one 6 min sample (t = 39-45 min; K + -evoked release). A 90s period of depolarization, by exposure to KCl 15 mmol/L, substituting for an isosmotic NaCl concentration, was applied at t = 39 min. When appropriate, DL-TBOA was added 9 min before KCl; Ca 2+ -free medium (containing 8.8 mmol/L MgCl 2 ) was introduced 19 min before KCl. [ 3 H]-D-Asp radioactivity was determined in each collected sample and in the superfused filters by Packard Tri-Carb 2111 TR liquid scintillation counter. The amount of released [ 3 H]-D-Asp in each collected sample was expressed as percentage of total synaptosomal radioactivity content at the beginning of the respective collection period (fractional rate6100). Depolarization-evoked neurotransmitter overflow was estimated by subtracting the transmitter content of the two 3-min samples representing the basal release from that in the 6-min sample collected during and after the depolarization pulse. Appropriate controls were always ran in parallel. Gel electrophoresis One-dimensional gel electrophoresis: SDS or Native-PAGE. For western blotting analysis under reducing and denaturing conditions, equal amount (8-15 mg) of proteins were resolved by 10% sodium dodecyl sulfate (SDS)-polyacrilamide gel electrophoresis (PAGE). To analyze the protein of interest under native conditions 30 mg of total proteins were separated under non-denaturing conditions using a 5% stacking and a 6% separating native-polyacrylamide gel. After separation, proteins were transferred onto PVDF membranes (Immobilon-P, Sigma-Aldrich, Milan, Italy). For native samples, amido-black staining was used as loading control. Two-dimensional gel electrophoresis: Native/SDS-PAGE. Gel lanes containing total proteins were cut out from Native-PAGE with a razor blade. Each lane was incubated with gentle agitation on a glass plate in a dissociating solution 1% (w/v) SDS and 1% (v/v) 2-mercaptoethanol for 1 h at RT and briefly rinsed with bi-distilled water. After three washes with SDS-PAGE electrophoresis buffer (25 mM Tris-HCl, 192 mM glycine and 0,1% (w/v) SDS; pH 8.3) the strip was rotated through 90u and placed onto SDS-PAGE using a 5% stacking and 10% separating gel. Native/SDS-PAGE containing total proteins was transferred onto PVDF membrane. Intravitreal administrations Cycloheximide (CHX) (Sigma-Aldrich, Milan, Italy), a protein synthesis inhibitor [31,32], was dissolved in sterile aqueous solution. Intravitreal injection was performed by puncturing the eye with a 23-gauge needle at the cornea-sclera junction and the drug was administered with a 5 ml Hamilton syringe (Bonaduz, GR, Switzerland). 
CHX (50 µg/3 µl/eye) or an equal volume of control solution was administered at the end of the ischemia. The duration of the injection was 2 min in all instances. Animals were killed after 150 min of reperfusion and subjects with visible lens damage or vitreous hemorrhage were excluded from the study. Immunohistochemistry Enucleated eyes were fixed in 2% paraformaldehyde (PFA) at 4 °C for 10 min; after removal of the anterior segment, the posterior was fixed in 4% PFA for 60 min and cryopreserved in 30% sucrose overnight. Specimens were frozen in Optimal Cutting Temperature medium (Tissue-Tek, Sakura Finetek Europe, Alphen aan den Rijn, The Netherlands), and 16-µm cryostat sections were cut, mounted onto Superfrost ultra plus glass slides (Menzel-Gläser, Braunschweig, Germany) and stored at -80 °C until used. Retinal sections were washed in 0.1 M PBS (pH 7.4), permeabilized with 0.3% Triton for 45 min and blocked with 10% donkey serum (Sigma-Aldrich, Milan, Italy) at RT for 1 h. Slides were incubated overnight with rabbit anti-GLT-1 (1:50, Cell Signaling Technology, Beverly, MA, USA) or mouse anti-GLAST (1:300; code ab 41751, Abcam, Cambridge, UK). Immunofluorescence labeling was performed by incubation with anti-rabbit Alexa Fluor 488 (1:250) or anti-mouse Alexa Fluor 488 (1:500; Molecular Probes, Eugene, OR, USA) at RT for 1 h. Sections were mounted with Vectashield mounting media with DAPI to label the nuclei (Vector Laboratories, Burlingame, CA, USA). Image acquisition was performed using a confocal microscope (Leica TC-SP2 Confocal System; Leica Microsystems Srl, Milan, Italy). Statistical analysis Data are given as mean ± S.E.M. of three to eight independent experiments and statistically evaluated for differences by Student's t-test or by one-way analysis of variance, followed by the Tukey-Kramer test for multiple comparisons. A value of p < 0.05 was considered to be statistically significant.
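As a side note on the uptake analysis above, apparent Km and Vmax values of the kind reported in the Results are commonly obtained by fitting a Michaelis-Menten curve to saturation uptake data. The minimal sketch below illustrates such a fit; the substrate concentrations and uptake rates are synthetic placeholders, not the study's measurements.

    # Sketch of estimating apparent Km and Vmax from saturation uptake data.
    # The data points below are synthetic, chosen only to resemble the ranges used above.
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        return vmax * s / (km + s)

    s = np.array([3.0, 10.0, 30.0, 100.0])   # [3H]-D-Asp concentration, microM (synthetic)
    v = np.array([0.7, 2.1, 4.2, 7.4])       # uptake rate, nmol/min/mg protein (synthetic)

    (vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[8.0, 40.0])
    vmax_se, km_se = np.sqrt(np.diag(cov))
    print(f"Vmax = {vmax:.2f} +/- {vmax_se:.2f} nmol/min/mg, Km = {km:.1f} +/- {km_se:.1f} microM")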
5,388.6
2013-08-06T00:00:00.000
[ "Biology", "Chemistry" ]
On a posteriori error bounds for approximations of the generalized Stokes problem generated by the Uzawa algorithm Abstract In this paper, we derive computable a posteriori error bounds for approximations computed by the Uzawa algorithm for the generalized Stokes problem. We show that for each Uzawa iteration both the velocity error and the pressure error are bounded from above by a constant multiplied by the L2-norm of the divergence of the velocity. The derivation of the estimates essentially uses a posteriori estimates of the functional type for the Stokes problem. Introduction Let Ω ∈ R n be a bounded connected domain with a Lipschitz continuous boundary ∂ Ω. Henceforth, we use the space of vector valued functions where M n×n is the space of symmetric n × n-matrices (tensors). The scalar product of tensors is denoted by two dots (:), and the L 2 norm of Σ is denoted by · Σ . The L 2 norm of scalar and vector valued functions is denoted by · . ByS(Ω) we denote the closure of smooth solenoidal functions w with compact supports in Ω with respect to the norm ∇w Σ . Let V 0 (Ω, R n ) denote the subspace 322 I. Anjam, M. Nokka, and S. I. Repin of V (Ω, R n ) that consists of functions with zero traces on ∂ Ω. The space of scalar valued square summable functions with zero mean is denoted by L 2 (Ω, R). The classical statement of the generalized Stokes problem consists of finding a velocity field u ∈S(Ω) + u D and pressure p ∈ L 2 (Ω) which satisfy the relations −Div (ν∇u) where f ∈ L 2 (Ω, R n ), and ∂ Ω u D · n dx = 0. The generalized solution of (1. The quantity in L A is called the augmented Lagrangian (in which λ ∈ R + ). We have From the right-hand side inequalities we see that Ω (p − q)div u dx = 0 for all q ∈ L 2 , from which we conclude that div u = 0. From the left-hand side inequalities it follows that for any solenoidal v we have On a posteriori error bounds 323 Indeed, the exact solution of the problems is (u, p). For a detailed exposition of this subject, we refer to [4]. Finding approximations of (u, p) can be performed by the Uzawa algorithm presented below. For the Lagrangian L, we have the problem: Find u k ∈ V 0 + u D such that: For the Lagrangian L A , we have the problem: Find u k ∈ V 0 + u D such that: (1.7) 4: Set k = k + 1 and go to step 2. Our goal is to deduce computable bounds of the difference between u k and the exact solution u in terms of the energy norms provided that 0 < ρ < 2 min(ν, µ) (1.8) and p 0 ∈ L 2 (Ω). If µ ≡ 0, the condition is 0 < ρ < 2ν. These conditions are the same for both (1.5) and (1.6). Proof. The proof is based on well known arguments (see, e.g., [13]). However, for the convenience of the reader, we present the proof for the generalized Stokes problem, in the case of (1.5). The exact solution of the generalized Stokes problem satisfies the relation We set w = u k − u and subtract (1.9) from (1.5), which gives Let v k := u k − u and q k := p k − p. Then we rewrite this relation in the form On the other hand, (1.7) is equivalent to . By combining (1.10) and (1.11), we obtain On a posteriori error bounds 325 where δ ∈ (0, 1). Note that and, therefore, (1.12) implies the estimates Now, we sum inequalities (1.13) for k = 0, . . . , N and find that (1.14) Because of condition (1.8), there exists a δ * ∈ (0, 1) such that We set δ = δ * in (1.14), and see that Also, we see that q k = p k − p is bounded in L 2 (Ω), so p k is bounded in L 2 (Ω). 
We also observe from (1.14), that so we can extract from p k a subsequence p k ′ , which converges to some element p * weakly in L 2 (Ω). The equation (1.5) gives in the limit and by comparison to (1.9) we find that In other words, the sequence p k ′ converges weakly to p in L 2 (Ω) However, if p 0 ∈ L 2 , then it is easy to see from (1.7) that p k ∈ L 2 with all k. From this we make the conclusion that the sequence p k ′ converges weakly to p in L 2 (Ω). Error estimates for exact solutions generated by the Uzawa algorithm In this section, we show that the errors of approximations generated by the Uzawa algorithm are controlled by the L 2 -norm of the divergence of the velocity. First, we compare approximations computed on two consequent iterations and establish the following result. In addition, for (1.6) we also have Proof. The equation for pressure (2.2) follows directly from (1.7). By subtracting the kth equation (1.5) from the (k + 1)th equation, we obtain we can estimate the right-hand side with Henceforth, we will use functional a posteriori error estimates for the Stokes problem derived in [11,12]. For a consequent exposition of the theory of functional a posteriori error estimates we refer the reader to [8,10]. For some simple domains the constant C LBB , or the bounds for it, are known (see, e.g., [3,5,9]). Lemma 2.1 implies an important corollary. Let v ∈ V 0 , and div v = g. Then there exists a function v g ∈ V 0 such that div(v − v g ) = 0, and This means that there exists a solenoidal field Thus, we can find a function w 0 ∈S(Ω) + u D such that With the help of (2.4) we can now derive our main results. We show that the errors of u k and p k generated on the iteration k of the Uzawa algorithm are both estimated from above by the L 2 -norm of the divergence of u k multiplied by a constant depending on C LBB . The proofs are based on the derivation of functional a posteriori error estimates for the generalized Stokes problem as they are presented in [12]. Here C F is the constant in the Friedrichs inequality w C F ∇w Σ and C LBB is the constant in the LBB-condition. Proof. Let u 0 ∈S(Ω) + u D be such that, by using (2.4), we have Let the pair (u k , p k ) be an approximation of the saddle point computed on the iteration k. We can now write First, we estimate from above the first term on the right-hand side of (2.8). Let w ∈S. By subtracting the integral Ω (ν∇u 0 : ∇w + µu 0 · w) dx from both sides of (1.4) we obtain It is easy to see that Ω (Div τ · w + τ : ∇w)dx = 0 ∀τ ∈ Σ(Div, Ω), w ∈ V 0 (Ω) (2.10) and By adding (2.10) and (2.11) to the right-hand side of (2.9), we rewrite it in the form which is equivalent to Let us choose τ = ν∇u k and q = p k . In view of (1.5), we see that that the first integral of (2.13) vanishes. Indeed, Since w is a function fromS, the same conclusion is also true if u k has been calculated by (1.6). We combine (2.9) with (2.12)-(2.14), and arrive at the relation The right-hand side of (2.15) can be estimated from above as follows: where we have used the Cauchy-Schwarz inequality. We set w = u − u 0 , and find that Note that for all w ∈ V we have We substitute (2.17) into (2.8), and use (2.18) with w = u − u 0 , and obtain In order to prove a similar estimate for the pressure, we also need Lemma 2.1. Let q ∈ L 2 be an approximation of the exact pressure p. Then (p − q) ∈ L 2 and there exists a function w ∈ V 0 such that div(w) = p − q (2.20) and where C = 2C 2 for (1.5), and C = 2C 2 + λ for (1.6). Proof. 
We use (2.20) for q = p k and obtain Multiplying (1.1) by w and integrating over Ω, we obtain In view of this relation, we have We use (2.10) with w = w, and arrive at the relation As before, we choose τ = ν∇u k , and observe that the first integral is zero. By estimating the latter integral with the help of the same arguments as in (2.16), we find that p − p k 2 ||| u − u k ||| ||| w ||| . (2.24) By (2.18) and (2.21), we obtain where C is defined in (2.6). Substituting (2.25) into (2.24) results in the estimate In the case of (1.6), we add Ω λ div(u k − u k )div w dx = 0 to (2.23) and obtain Again, we choose τ = ν∇u k , and see from (1.6) that the first and second integrals are zero. By estimating the latter integral with same arguments as in (2.16), we obtain Recall that div w = p − p k . Now, (2.25) and (2.26) imply the estimate Applying Theorem 2.2 results in (2.22). By Theorems 2.2 and 2.3, we easily conclude the following statement. Remark 2.1. The classical Stokes problem corresponds to the case where µ ≡ 0 and ν is a constant. Let (u k , p k ) be the exact solution computed on the iteration k of the Uzawa algorithm, for the Stokes problem. Then, for velocity we have (for both cases (1.5) and (1.6)) For the pressure we have p − p k C div u k whereC = 2C −2 LBB ν for (1.5) andC = 2C −2 LBB ν + λ for (1.6). Computable error estimates for approximations generated by the Uzawa algorithm Let T h be a mesh having the characteristic size h, and let the spaces V 0h (Ω, R n ) and Q h (Ω) be finite dimensional subspaces of V 0 (Ω, R n ) and L 2 (Ω), respectively. We assume that for all v h ∈ V 0h + u D it holds that div v h ∈ Q h . We also assume that the spaces are constructed so that they satisfy the discrete LBB-condition, i.e, for any q h ∈ Q h with zero mean, there exists v h ∈ V 0h such that where the positive constant c does not depend on h. Let u k h ∈ V 0h + u D be an approximation of u k calculated on the mesh T h . We need to combine the error of the pure Uzawa algorithm with the approximation error. Below we present the corresponding results, where we set p k = p k h ∈ Q h on the iteration k, and understand u k as satisfying (1.5), or (1.6), with the chosen p k h . Then, the pair (u k , p k h ) can be viewed as the exact pair associated with the Uzawa algorithm on iteration k. Our first goal is to derive fully computable error majorants M k ⊕ and M k,λ ⊕ for approximate solutions (e.g., u k h ) of the problems generated at the first step of Uzawa algorithm by the Lagrangians L and L A , respectively. In order to make the quality of the majorants robust with respect to small or large values of the material functions ν or µ, we apply the same method that was suggested in [12] for the generalized Stokes problem. Later we combine these estimates with the estimates of the difference between u and u k and obtain estimates applicable for approximate solutions computed within the framework of finite dimensional approximations. First, we prove the following result for the problem generated by the Lagrangian L. 4) Here I denotes the unit tensor. Proof. By equation (1.5) we have We subtract the integral Ω ν∇u k h : ∇w + µu k h · w dx from both sides of the above equation, and obtain By adding (2.10) to the right-hand side of (3.5) we have where we have used the notation (3.3) and (3.4). Note that where 0 α(x) 1. 
Also, we have By (3.7) and (3.8) the right-hand side of (3.6) becomes We set w = u k − u k h , use (3.6) and (3.9), and obtain It is easy to see that the optimal value of α is defined by the relation so that (3.10) implies the estimate where we have used the notation (3.1) and (3.2). Remark 3.1. It is easy to see that the upper bound M k ⊕ is sharp. Indeed, by setting τ = ν∇u k − Ip k h , and letting β tend to infinity, we get the exact error in the energy norm ||| · |||. Proof. Indeed, from (3.15) and (3.16) we find that Applying the error bounds presented in Theorems 3.1 and 3.2 completes the proof. This paper is focused on theoretical analysis of a posteriori error bounds for approximations computed by the Uzawa algorithm. However, it is worth adding some comments on the practical applications of the above derived error majorants. The majorants contain the function τ ∈ H(Div, Ω) and a positive parameter β , which in general can be taken arbitrary. Getting sharp estimates requires a proper selection of them. Finding an optimal β leads to a one-dimensional optimization problem which is easy solvable. The reconstruction of the stress tensor τ based upon computed functions u k h and p k h provides a reasonable first guess. A better selection can be performed by methods that have been developed and tested for various elliptic problems (see, e.g., [8,10,14] and the references cited therein). A systematical study of computational questions in the context of above derived estimates will be exposed in a separate paper, which is now in preparation.
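For readers unfamiliar with the method, the classical Uzawa iteration for the generalized Stokes problem discussed above can be summarized as follows. The notation and sign conventions are the standard textbook ones and may differ slightly from the paper's (1.5)-(1.7); the augmented Lagrangian variant (1.6) additionally carries the λ div-div term on the left-hand side of the velocity equation.

    % Standard Uzawa iteration for the generalized Stokes problem (sketch);
    % the paper's pressure update (1.7) may use a different sign convention.
    \begin{align*}
      &\text{Given } p^{k} \in L^{2}(\Omega),\ \text{find } u^{k} \in V_{0} + u_{D} \text{ such that}\\
      &\qquad \int_{\Omega} \bigl(\nu \nabla u^{k} : \nabla w + \mu\, u^{k} \cdot w\bigr)\,dx
          = \int_{\Omega} \bigl(f \cdot w + p^{k}\, \operatorname{div} w\bigr)\,dx
          \quad \forall\, w \in V_{0},\\
      &\text{then update the pressure:}\qquad
          p^{k+1} = p^{k} - \rho\, \operatorname{div} u^{k},
          \qquad 0 < \rho < 2\min(\nu,\mu).
    \end{align*}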
3,390.8
2012-01-01T00:00:00.000
[ "Mathematics" ]
Fine Resolution Probabilistic Land Cover Classification of Landscapes in the Southeastern United States Land cover classification provides valuable information for prioritizing management and conservation operations across large landscapes. Current regional scale land cover geospatial products within the United States have a spatial resolution that is too coarse to provide the necessary information for operations at the local and project scales. This paper describes a methodology that uses recent advances in spatial analysis software to create a land cover classification over a large region in the southeastern United States at a fine (1 m) spatial resolution. This methodology used image texture metrics and principle components derived from National Agriculture Imagery Program (NAIP) aerial photographic imagery, visually classified locations, and a softmax neural network model. The model efficiently produced classification surfaces at 1 m resolution across roughly 11.6 million hectares (28.8 million acres) with less than 10% average error in modeled probability. The classification surfaces consist of probability estimates of 13 visually distinct classes for each 1 m cell across the study area. This methodology and the tools used in this study constitute a highly flexible fine resolution land cover classification that can be applied across large extents using standard computer hardware, common and open source software and publicly available imagery. Introduction Land cover classification is a common remote sensing process that assigns classes to geographic areas based on remotely sensed data.Classifications are typically conducted on a per-cell basis and fit into two broad categories, supervised or unsupervised.In unsupervised classification raster cells are grouped prior to classification, while in supervised classification an analyst assigns a subset of cells to train the classification algorithm [1].Land cover classifications are versatile and often used in climate modeling [2], biodiversity monitoring [3], studies of landscape change [4] and land use planning [5].In forest management, land cover classifications are frequently used to inform management activities such as timber harvest [6], forest restoration [7], fire risk mitigation [8], and preservation of rare habitats [9].From land cover classification datasets, relevant objectives such as locating forested and non-forested areas [10] or determining the proportion of impervious surface occupying landscape [11] can be quickly addressed.Land cover classifications can also be used as a component of more complex analyses of landscape characteristics [12] and can be used to describe important characteristics of forest and woodland ecosystems, such as percent canopy cover, understory composition within open forests, and the degree of fragmentation. 
Current land cover classification products such as the National Land Cover Database (NLCD) [13] provide a national classification of land cover at a spatial resolution of 30 m.While valuable for many applications and readily available, classifications like NLCD are generally considered too coarse for informing specific forest operations [14], such as prioritization of individual stands for restoration treatments.Examples of fine resolution land cover classifications that can be used across broad extents to plan project-specific operations are relatively scarce.In large part this scarcity is due to processing limitations and the complexities associated with creating fine resolution land cover classification.These same limitations also apply to the types of variables that can be used to describe texture information within imagery and guide classification [15].Because of these limitations, tradeoffs usually occur between spatial resolution and extent, with fine resolution classifications relegated to small spatial extents and large extent classifications limited to coarser resolution imagery. While advances in computer hardware and computationally efficient algorithms can directly address these limitations, much of the recent research into land cover classifications has focused on the classification algorithm used to identify classes in supervised classification [16].Studies have investigated the use of machine learning techniques such as decision tree classifiers [17], artificial neural networks [18], and support vector machines [19] in land cover classification.Alternative classification methodologies such as object-orient classifiers have also received attention recently [20].There has been less focus on addressing the limitations of applying these classification techniques to fine resolution imagery across large extents, with a few notable exceptions [10,21,22]. Some recent work has focused on the use of probabilistic land cover classifications as opposed to using deterministic, or hard classifications [21][22][23].Most land cover classifications, including NLCD, are hard classifications that identify a single deterministic class or most likely class (MLC) for each raster cell or classified area.In contrast, probabilistic classifications provide a probability for each class, which is more versatile in many respects.Probability surfaces can be manipulated and displayed independently or translated into many different types of user defined hard classifications, such as MLC.One recent study found that probabilistic classifications retain more of the information contained in an image [23].Though probabilistic land cover classifications can provide an information rich dataset that can be flexibly applied to answer numerous management questions, the use of probabilistic classifications has not been commonly adopted in the forest management community. 
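To make the contrast with hard classifications concrete, the following sketch shows how a most likely class (MLC) map can be derived from a stack of per-class probability surfaces; the probability values here are randomly generated stand-ins rather than outputs of any real classification.

    # Deriving a hard "most likely class" (MLC) map from a probabilistic classification.
    # The 13-band probability array is synthetic.
    import numpy as np

    n_classes, rows, cols = 13, 4, 5
    probs = np.random.dirichlet(np.ones(n_classes), size=(rows, cols))  # (rows, cols, 13), rows sum to 1
    probs = np.moveaxis(probs, -1, 0)                                   # reorder to (13, rows, cols)

    mlc = np.argmax(probs, axis=0)        # hard class = band with the highest probability
    confidence = np.max(probs, axis=0)    # keep the winning probability as a confidence layer
    print(mlc)
    print(np.round(confidence, 2))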
Land management organizations have a need for spatially explicit information at fine resolution that can address multiple conservation and restoration questions and that can be used to prioritize and plan restoration activities across ownership boundaries.Useful information includes the description of grasses, shrubs, trees, and non-forested areas, the location of forests suitable for restoration, and characteristics like average tree diameter and forest density-all displayed spatially at fine resolution across the extent of the entire region.Fine resolution probabilistic classifications that cover broad extents can provide such information and can be used to help plan and prioritize project level forest management activities across large landscapes. This study describes a methodology to produce a fine resolution land cover classification that quantifies and maps the spatial patterns of land cover types across a broad extent.This method is being implemented in a large portion of the southeastern US to produce a land cover classification that can help plan and prioritize forest restoration and other land management activities.The study follows the example of recent work on 1 m or finer spatial resolution land cover classifications that uses non-parametric models [17,20], first and second order texture variables [17,23], and probabilistic classifications [24,25].However, this project is unique in combining the advantages of these approaches and applying them at a regional scale over a large landscape.To our knowledge this study produced one of the first landscape scale, fine resolution land cover classifications [10,26] and is the only one performed at this broad extent on a single standard off-the-shelf stock computer. Study Area Our study area consisted of four significant geographic areas (SGAs) in the southeastern United States delineated in the Range-Wide Conservation Plan for Longleaf Pine [27] (Figure 1).These areas have been targeted for focused longleaf pine (Pinus palustris) ecosystem restoration due to the existence of remnant longleaf pine and sites suitable for restoration, as well as the desire by land managers and stakeholders to restore longleaf ecosystems to these areas.Longleaf ecosystems are some of the most critically endangered ecosystems in the world [28].What remains of these once dominant forests supports many rare plants and animals and provides refuge for threatened and endangered species [29].
Methods Overview In addition to the NLCD, currently available land cover classification products in this study area include the Cooperative Land Cover Dataset (CLCD) [30] and Condition Class for Management (CCM) [31] map from Florida Natural Areas Inventory (FNAI) (Figure 2).The CLCD and CCM are vector and raster-based maps that provide fewer cover types than NLCD and are more directly tailored to longleaf pine, but only partially cover our study area.These land cover datasets, similar to NLCD, have a medium spatial resolution of 30 m to 100 m.The land cover classes at this medium resolution tend to be broad amalgams of the underlying vegetation that provide limited ability to prioritize areas for longleaf restoration or describe forest structure and composition at the stand scale. To describe land cover at fine spatial resolutions, we created probabilistic land cover classification surfaces using United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) aerial photographic imagery that has a spatial resolution of 1 m [32].Imagery like NAIP provides information with the fine spatial resolution needed to inform planning and prioritization, but must be translated into condition classes that are relevant to specific applications by conservation planners and stakeholders.Our classification approach follows the recommendation of Hogland et al.
[24] to produce probabilistic classification outputs from a combination of remotely sensed data and classification information (Figure 3). A series of softmax neural network (SNN) models was produced that links the principal components of NAIP spectral and texture values with a sample of visually interpreted points to produce 1 m probabilistic surfaces for 13 different visually distinct land cover classes (Table 1). Due to state level differences in the base NAIP imagery, different models were produced for each of the three states in our study area (Alabama, Georgia, and Florida; Figure 1). Digital number (DN) values for each of the four NAIP bands were combined with standard deviation and grey level co-occurrence matrix (GLCM) values in singular vector decomposition principal component analysis (PCA) to reduce the dimensionality of the data and quantify patterns within the NAIP imagery [33]. The first six principal components of the PCA were used as predictor variables in our SNN models. Sample points were visually classified in each state and combined with the PCA values to train SNN models. The SNN models for each state were then applied to produce probabilistic land cover classification surfaces for each of the four SGA. All analyses were performed using the RMRS Raster Utility toolbar [34] and ESRI's ArcGIS geographic information system (GIS). To facilitate the tabular, spatial, statistical, and GIS analyses performed, we developed a suite of spatial modeling tools that take advantage of Function Modeling [24,35,36] and parallel processing. These tools work within ESRI's
GIS and are available for free download [34].The remainder of this section describes in more detail the datasets and procedures used in this study. Imagery and Data We chose NAIP as our base imagery because of its fine spatial resolution, its complete coverage over the conterminous United States, and the fact that it is freely available for download [37].We acquired NAIP color-infrared imagery flown in the year 2013 in Alabama, Georgia and Florida within our study area.NAIP color-infrared imagery consists of four 8-bit spectral bands (red, green, blue and near-infrared (NIR)) at 1 m spatial resolution.NAIP imagery within our SGAs was primarily collected from August to November of 2013 with parts of the Ocala SGA collected in May of 2013.Our 2013 NAIP imagery was acquired from aircraft using digital cameras and was mosaicked and separated into digital orthophoto quarter quad tiles (DOQQs) that are roughly 7 km east to west and 8 km north to south [32].Our study area included 1674 DOQQ in Florida, 1008 in Georgia, and 537 in Alabama, totaling 3219 DOQQ (Figure 1).Each state's tiles were mosaicked together on the fly in ESRI ArcMap as a mosaic raster dataset, which allowed us to refer to each state mosaic as a single raster for our analysis. One major challenge to using fine resolution imagery like NAIP for a large geographic extent is the large amount of data, which can be unwieldy and time consuming to process [38].Studies that have used NAIP imagery at large extents have addressed this challenge by limiting classification to a small number of classes that address specific land cover questions [11] in combination with the use of specialized computer hardware [10].However, due to recent advances in image analysis software, we are now able to conduct broad extent fine resolution land cover classification using standard computer hardware more efficiently than was previously possible [34].Specifically, the RMRS Raster Utility toolbar, and its associated ESRI ArcGIS add-in, uses function modeling, batch processing, advanced statistical models, and parallel processing to efficiently produce predictive models and surface outputs for big data applications [36]. Predictive Surfaces In addition to the NAIP spectral information, two texture variables were derived from each NAIP band in a 3 by 3 moving window to quantify the texture values of the cell's immediate neighbors.Other moving window sizes, such as 4 × 4 and 5 × 5, were tested but added little to no significant textural information while increasing the complexity of the model and decreasing the model's efficiency.Texture was quantified using a first-order standard deviation and a second order horizontal contrast gray level co-occurrence matrix (GLCM) for each of the four NAIP bands [39].The textural measurements combined with the spectral values of the NAIP bands comprise the twelve bands that we used in our principal component analysis for each state.PCAs were performed for all data in each state using random samples of 20,859 cells in Alabama, 23,411 cells in Georgia, and 65,730 cells in Florida.Using the PCA models, we transformed the 12 bands derived from NAIP into principal component raster surfaces rescaled to values between 0 and 255.The top six principle components were used as predictive variables in our SNN models. 
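To make the texture-plus-PCA step above concrete, the sketch below derives the twelve input layers for a single tile and reduces them to six components. It is a minimal illustration, not the RMRS Raster Utility workflow: the tile is random data standing in for NAIP, the sample size is arbitrary, and the "contrast" measure is a simplified horizontal-difference proxy rather than a full GLCM calculation.

```python
# Illustrative sketch (not the RMRS Raster Utility workflow): build the 12
# spectral/texture bands described above for one fake NAIP tile and reduce
# them to six principal components. The "contrast" here is a simplified
# horizontal-contrast proxy, not a true GLCM.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA

def local_stats(band, win=3):
    """First-order standard deviation and a simple horizontal-contrast
    texture measure for each cell, using a win x win moving window."""
    mean = uniform_filter(band, win)
    mean_sq = uniform_filter(band * band, win)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    # squared difference between horizontally adjacent cells, averaged over
    # the window -- a stand-in for GLCM horizontal contrast
    diff_sq = np.zeros_like(band)
    diff_sq[:, 1:] = (band[:, 1:] - band[:, :-1]) ** 2
    contrast = uniform_filter(diff_sq, win)
    return std, contrast

rng = np.random.default_rng(0)
naip = rng.integers(0, 256, size=(4, 500, 500)).astype(float)  # R, G, B, NIR DNs

layers = []
for band in naip:                     # 4 spectral + 2 texture per band = 12 layers
    std, contrast = local_stats(band)
    layers.extend([band, std, contrast])
stack = np.stack(layers)              # (12, rows, cols)

# Fit the PCA on a random sample of cells, keep the first six components
flat = stack.reshape(12, -1).T
sample = flat[rng.choice(flat.shape[0], 20000, replace=False)]
pca = PCA(n_components=6).fit(sample)
pc_surfaces = pca.transform(flat).T.reshape(6, *naip.shape[1:])
print(pca.explained_variance_ratio_.sum())
```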
Classified Samples The other input into the SNN models was a sample of visually interpreted locations.We randomly selected 3712 locations within our study area and digitized them as points, with 1640 points in Florida, 1083 in Alabama and 989 in Georgia.Sample points were visually classified into 13 land cover classes (Table 1) by an analyst for a 3 by 3 cell area surrounding the point location.The determination of the classes in Table 1 was driven by the imagery (classes must be visually distinct to a human classifier on NAIP, including the NIR band) and by the project requirements focused on identification of classes relevant to longleaf pine management and conservation.Some classes were easily identifiable including water, bare ground and pavement.Other classes were subdivided.For example, grass was subdivided into green and dry (dormant) grass, due to noticeably different spectral presentation.The dark, light and grey tree crown classes correspond to coniferous, deciduous, and senesced deciduous or dead trees, respectively.Each sample point's spatial coordinates were used to extract the coincident values from the first six principal component surfaces and those values were appended to our visually classified points.SNN models were then developed to predict the probability of a cell being a specific class based on the principal component values derived from the NAIP imagery at that point. Modeling We chose to use SNN classification because it employs a machine learning technique that offers several advantages for use in land cover classifications.Specifically, neural networks are non-parametric and as such they do not assume known distributions of explanatory variables.The softmax function links the neurons in the neural network and produces probabilistic output values, which have been found to be more descriptive and flexible for land cover mapping than discriminant classification outputs [24,40].Similar to various probabilistic multiclass classification methods, including multinomial logistic regression, SNN probabilistic outputs are themselves a per-cell estimation of the mean class probability [24].This allows for easy estimates of model error for any subsequent rules that may be applied to a cell. The classified points and their coincident principle component values were used to train a SNN model for each of the three states within our study area.We applied the SNN models to our principle component surfaces to create three 13 band raster surfaces that estimate the probability of a cell being each of the 13 land cover classes.The outputs were rescaled to integer values between 0 and 100, and saved as a 13-band unsigned 8-bit ERDAS Imagine (.img) file. 
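A minimal stand-in for the SNN training step described above is sketched below. The text does not specify the network architecture, so a single softmax layer (multinomial logistic regression) is used purely for illustration; the sample count and the rescaling of probabilities to 0-100 integers follow the description above, while the synthetic training data are placeholders.

```python
# Minimal stand-in for the SNN step: a single softmax layer mapping the six
# PC values at each labeled point to probabilities for 13 classes, then
# rescaled to 0-100 unsigned 8-bit values. The real SNN architecture is not
# given in the text; this is only a sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1083, 6))         # PC values at sample points (e.g., Alabama)
y_train = rng.integers(0, 13, size=1083)     # visually interpreted class labels

softmax_model = LogisticRegression(max_iter=1000)  # multinomial (softmax) with lbfgs
softmax_model.fit(X_train, y_train)

# Apply to a whole PC surface: (6, rows, cols) -> (13, rows, cols) of 0-100
pc_surfaces = rng.normal(size=(6, 200, 200))
probs = softmax_model.predict_proba(pc_surfaces.reshape(6, -1).T)   # (cells, 13)
prob_bands = np.rint(probs.T.reshape(13, 200, 200) * 100).astype(np.uint8)
```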
PCA To minimize the number of dimensions used in our modeling stage, we performed a PCA using NAIP spectral values and texture variables.In total we were able to reduce the dimensionality of the NAIP data and texture derivatives from 12 bands to 6.The top six principle components explained between 92% and 94% of the variation from our twelve input variables (Table 2).There were some common trends in all three PCA eigen vectors.The first two components emphasized red, blue and NIR GLCM contrast values, as well as the green band spectral value.The third component emphasized the green band spectral value along with the NIR spectral value, the red band GLCM value, and to a lesser extent the green and blue band standard deviation values.These three principal components accounted for approximately 78% of the variation in the data in all three PCAs.The remaining components emphasized standard deviation and horizontal GLCM contrast, as well as the NIR spectral value. A total of 3219 six band principal component raster surfaces were created, corresponding to the number of DOQQ tiles covering our study area.NAIP DOQQ tiles were separated into mosaics by state.Each state was processed separately using a state specific PCA model.Processing time to create the principle component rasters was approximately 168 h (1 week).Rasters were processed in parallel across 16 logical cores on one computer using solid state drives and two 3.50 GHz Intel I7 processors.This amounted to roughly 40 min of processing time for each NAIP DOQQ. Modeled Outputs Using the values from the first six principal components and the sample of visually identified classes, we created three SNN models, one for each state within our study area.We used the average error (the difference between the training data values and the modelled values) in modeled probability to assess model fit.The average error of our three models ranged from 8.9% to 9.3% (Table 3).Using these models, and each state's previously generated principle component mosaic, we built multiband probabilistic raster surfaces (13 bands each) for the extent of each DOQQ tile within a state, estimating the probability of each class for every raster cell (Figure 4).Probabilistic raster surfaces were then mosaicked together for each state.This process took approximately 40 min per DOQQ tile, running in parallel across 16 logical cores on a single computer.Total processing time for our entire study area was roughly 6 days.The ability of the probabilistic land cover classifications to differentiate between cover classes is visually demonstrated in Figure 4, where darker shades indicate higher probabilities and lighter shades indicate lower probability.The vertical strip in the bottom middle of the image's extent is a pine plantation and is apparent in the "tree crown dark" band, which is our coniferous cover class (Figure 4a) and its inverse in the "tree crown light" band, which is our deciduous cover class (Figure 4b).In a forested landscape such as this one, the shadow class (Figure 4h) is widespread and closely tied to shadows cast by trees, but has low probability in the fields and bare ground areas.The shrub band (Figure 4c) looks washed out because of the overall low probabilities of shrubs across the area.Distinct features are discernable in several cover classes: fields are distinguishable in the grass bands (Figure 4d,e), roads in the bare ground band (Figure 4f), and the two ponds within the figure's extent appear as the only dark areas in the water band in Figure 4h.The 
dark areas of higher probabilities that compose these features are visual evidence of the model's classification ability.Comparing these probabilistic classification outputs to the previously available land cover classifications in Figure 2 over the same extent makes the advantage of fine spatial resolution classification visually apparent. The final raster outputs were 1.25 terabytes in total size at 1 m resolution.To facilitate use and distribution of this land cover classification we aggregated and resampled the 1 m outputs to a spatial resolution of 10 m.The aggregation routine calculated the average class probability for each land cover class within 100 square meters (100 cells in a 10 × 10 moving window).The resulting aggregation can be interpreted as the proportion of area each class occupies within that area.Final land cover outputs at both the 1 m and 10 m resolution were re-projected to Albers equal-area conic projection to facilitate accurate area estimates.The 10 m aggregate land cover classification products, along with all the products from the Longleaf Mapping Project, are available online [41]. Example of Use The probabilistic land cover classification outputs in this study were developed to help identify and prioritize sites suitable for longleaf pine restoration.To demonstrate this application, we produced a simple set of rules that use cover percent to identify open pine stands (i.e., pine with large trees widely spaced with vegetative understory) and applied those rules to our land cover outputs to generate a conservation prioritization classification.Because our forest cover classifications do not distinguish between coniferous species, we were unable to directly locate longleaf pine cover explicitly.However, we did identify open pine stand characteristics that are typical of longleaf pine stands using our outputs [42].First, we ran a continuous focal analysis on our land cover classification at the 1 m resolution.The focal analysis assigned the mean values of all cells within a 30 by 30 cell moving window to the central cell for each land cover class.The focal analysis allowed us to identify stand size areas and smooth the outputs while maintaining the 1 m spatial resolution.Then we applied five criteria to our focal analysis that identify open pine stands: pine cover between 3% and 30%, shrub plus grass cover greater than 20%, crop cover less than 20%, building plus pavement cover less than 20%; and water cover less than 25%.The pine and shrub/grass criteria are the primary characteristics associated with open pine stands, while the other three criteria were included to eliminate water bodies, urban areas, and croplands.The resulting classification is visualized in Figure 5. Areas where all five criteria are met are defined as open pine stands.These criteria were effective at identifying areas of open pine stands, as seen in the bottom right of Figure 5, and in patches throughout.While the vast majority of areas within this example identify valid restoration sites, some areas that have forest and grass fields on either side of a road also meet the specified criteria, for example the top right corner of Figure 5. 
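The open pine screening just described reduces to a focal mean followed by a count of criteria met. A minimal sketch is given below; the band indices and the example array are hypothetical (the real ordering would come from Table 1), and green and dry grass are collapsed into a single "grass" band for brevity.

```python
# Sketch of the open-pine screening: a 30 x 30 focal mean of the per-class
# cover bands, then a count of how many of the five criteria each cell meets.
# Band order is assumed here for illustration only.
import numpy as np
from scipy.ndimage import uniform_filter

def open_pine_score(cover, idx):
    """cover: (13, rows, cols) class proportions in [0, 1];
    idx: dict mapping class names to band indices (hypothetical)."""
    focal = np.stack([uniform_filter(band, size=30) for band in cover])
    criteria = [
        (focal[idx["pine"]] >= 0.03) & (focal[idx["pine"]] <= 0.30),
        (focal[idx["shrub"]] + focal[idx["grass"]]) > 0.20,
        focal[idx["crop"]] < 0.20,
        (focal[idx["building"]] + focal[idx["pavement"]]) < 0.20,
        focal[idx["water"]] < 0.25,
    ]
    return np.sum(criteria, axis=0)      # 0-5 criteria met; 5 = open pine

idx = {"pine": 0, "shrub": 2, "grass": 3, "crop": 5,
       "building": 7, "pavement": 8, "water": 9}        # hypothetical band order
cover = np.random.default_rng(2).random((13, 300, 300))
cover /= cover.sum(axis=0)                              # proportions summing to 1
score = open_pine_score(cover, idx)
open_pine = score == 5
```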
Fortunately, these areas are easily identified and can be quickly removed as potential restoration sites. This example illustrates that fine resolution land cover classification outputs alone can be used to identify and prioritize longleaf restoration sites based on established criteria. Moreover, they could be used as part of a more complex prioritization method, which might include spatial estimates of tree basal area and stand density, or ancillary data such as road layers, parcel ownership maps and digital elevation models (DEM). Discussion This study builds on previous work to demonstrate an adaptable method for overcoming the previously existing barriers to using fine resolution land cover classifications across a broad extent. In addition, our use of probabilistic classifications illustrates the adaptable nature of these types of classifiers and how these surfaces can inform multiple analyses. The software tools used to conduct the analysis make producing probabilistic land cover classifications of this resolution and extent obtainable for a wider audience by reducing processing, programming and memory requirements. Our methodology efficiently produced probabilistic land cover surfaces at 1 m resolution across 11.6 million hectares (28.8 million acres) with less than 10% average error in modeled probability. The probabilistic surfaces can have various user defined rules applied in subsequent analysis such as an MLC rule (Figure 6) to address questions related to where classes are located, but are not limited to just one rule. This provides stakeholders with the flexibility needed to emphasize different characteristics of longleaf pine habitat in restoration planning. Due to the large extent, fine spatial resolution, and adaptability of these probabilistic land cover surfaces, multiple organizations can use these datasets as a common source of information for working to restore longleaf habitat in the region, even if those organizations use the data differently.
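As one example of such a user-defined rule, an MLC rule reduces to an argmax across the 13 probability bands (cf. Figure 6). A minimal sketch is given below; the mapping of class indices back to names is assumed to follow Table 1.

```python
# Most likely class (MLC) rule: collapse the 13 probability bands into one
# hard classification by taking the class with the highest probability at
# each cell. The published surfaces store probabilities as 0-100 integers.
import numpy as np

def mlc(prob_bands):
    """prob_bands: (13, rows, cols) array of class probabilities (0-100).
    Returns a (rows, cols) array of class indices 0-12."""
    return np.argmax(prob_bands, axis=0).astype(np.uint8)

# Example usage with placeholder data; indices map back to Table 1 classes.
prob_bands = np.random.default_rng(5).integers(0, 101, size=(13, 100, 100))
hard_class = mlc(prob_bands)
```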
There were challenges unique to using NAIP imagery as a base predictor dataset. NAIP imagery is processed into a state-wide mosaic that contain seamlines. These seamlines are an artifact of when and how NAIP images were collected and processed. To cosmetically adjust differences in DN values due to acquisition times and processing, NAIP uses a color balanced routine which alters spectral information in the overlapping regions of images [17,32]. These aspects of acquisition and processing can increase the classification error of models built using this imagery. Because of how NAIP imagery is acquired and processed, we saw slight distortion of our outputs around the edges of the flight lines, which is likely an artifact of NAIP's color balancing process. These challenges could be avoided by using different base imagery in this classification methodology, such as IKONOS or WorldView, however, these data can be expensive to obtain and would require additional image normalization.
Future classification studies using data from multiple years can add a temporal dimension to the analysis, allowing for land cover change studies at this high resolution. In the case of NAIP, this is facilitated by relatively frequent (2-3 year) return intervals. Ancillary inputs, such as DEM, Lidar data, or local high resolution spectral data, such as drone imagery, could be integrated into this methodology to improve specific class estimates and the downstream products that describe and map useful landscape characteristics. While our work has focused on spatial analysis for forest planning, restoration and management, the same methods can be used for land cover classification in many fields such as agriculture, wildlife management, and urban planning. Datasets generated using this approach have a wide range of applications. For example, some forest condition class rules require shrub and grass percent cover thresholds to meet desired conditions [42]. Raster surfaces like ours can be easily queried to find locations across landscapes meeting those requirements. Additionally, classification rules such as MLC can be applied to the probabilistic surfaces to create hard classification. Land cover surfaces can be queried to rank areas based on certain characteristics, such as the percent cover of deciduous trees, or in combination with other classes in various weighting schema and ancillary spatial data. Combined with other datasets, such as land ownership data, stakeholders can compare outcomes and efficiencies of various prioritization strategies across large landscapes.
Conclusions This project demonstrated a methodology to create a regional 1 m resolution probabilistic land cover classification with low average error outputs using standard computing hardware, ESRI GIS software and newly developed open source software. The fine resolution of the outputs provides land cover information at a resolution that is appropriate for use at the operational scale, such as prioritizing silvicultural treatments on specific ownerships and directing operations within individual forest stands. The probabilistic outputs are more flexible than hard classification outputs and can be used to derive many other data products for land management. Variables such as percent cover, forest composition, impervious surfaces, forested and non-forested land area, and the location of specific classes within any area can be quickly evaluated using a probabilistic land cover classification dataset. The low resource requirements and relatively quick processing time allows for low cost experimentation and provides a powerful new analytical tool for practitioners.
Figure 1. The four significant geographic areas (SGA) that are included in our study area along with the grid of overlapping National Agriculture Imagery Program (NAIP) digital orthophoto quarter quad (DOQQ) tiles.
Figure 2. Sample images from available public land cover classification products in our study area: (a) Cooperative Land Cover dataset from Florida Natural Areas Inventory; (b) Condition Class for Maintenance of Longleaf dataset from Florida Natural Areas Inventory; (c) National Land Cover Dataset (2011); and (d) National Agricultural Imagery Program (2013), which is the imagery used in this study for probabilistic classification.
Figure 5. (a) Open pine areas prioritized for conservation based on percent cover within a 30 m window. Colors correspond to the number of criteria met. The criteria are: pine cover between 3-30%, shrub plus grass cover greater than 20%, crop cover less than 20%, building plus pavement cover less than 20%, and water cover less than 25%; (b) NAIP imagery for the same extent.
Figure 6. Example of a high-resolution land classification generated by applying a most likely class (MLC) rule to a 13 band probabilistic land cover classification.
Table 1. Land cover classes and descriptions.
Table 2. Cumulative proportion of variation explained by each component for each state's principal component analysis (PCA).
Table 3. Softmax neural network land cover model average error for each state.
8,740.2
2018-03-14T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Physico-chemical characterization of aspirated and simulated human gastric fluids to study their influence on the intrinsic dissolution rate of cinnarizine To elucidate the critical parameters affecting drug dissolution in the human stomach, the intrinsic dissolution rate (IDR) of cinnarizine was determined in aspirated and simulated human gastric fluids (HGF). Fasted aspirated HGF (aspHGF) was collected from 23 healthy volunteers during a gastroscopic examination. Hydrochloric acid (HCl) pH 1.2, fasted state simulated gastric fluid (FaSSGF), and simulated human gastric fluid (simHGF), developed to have rheological and physico-chemical properties similar to aspHGF, were used as simulated HGFs. The IDR of cinnarizine was significantly higher in HCl pH 1.2 (952 ± 27 µg/(cm²·min)) than in FaSSGF pH 1.6 (444 ± 7 µg/(cm²·min)) and simHGF pH 2.5 (49 ± 5 µg/(cm²·min)) due to the pH dependent drug solubility and viscosity differences of the three simulated HGFs. The shear thinning behavior of aspHGF had a significant impact on the IDR of cinnarizine, indicating that the use of FaSSGF, with viscosity similar to water, to evaluate gastric drug dissolution might overestimate the IDR by a factor of 100-10,000, compared to the non-Newtonian, more viscous, fluids in the human stomach. The developed simHGF simulated the viscosity of the gastric fluids, as well as the IDR of the model drug, making it a very promising medium to study gastric drug dissolution in vitro. Introduction Drug dissolution in the physiological environment of the gastrointestinal tract (GIT) is a prerequisite for drug absorption following oral administration of solid dosage forms. Slow drug dissolution may decrease oral drug absorption (Vardakou et al., 2011), rendering the solubility and dissolution rate of a drug crucial for its in vivo performance (Dressman and Reppas, 2000). Especially in the case of weakly basic drugs, the dissolution rate in the stomach is important, as dissolution may be the rate limiting step for drug absorption (Vertzoni et al., 2005; Vertzoni et al., 2007; Amidon et al., 1995). Over the last 25 years, increased focus has been directed towards assessing dosage form performance in the GIT by using biorelevant dissolution media to study in vitro dissolution, solubility, and solid phase transition behavior of drug compounds and dosage forms. For highly soluble drugs, i.e. biopharmaceutics classification system (BCS) class I and III drugs, simple pharmacopoeial dissolution media, such as 0.1 M hydrochloric acid (HCl) pH 1.2, are considered suitable for evaluating intra-gastric behavior (Markopoulos et al., 2015). However, to evaluate the dissolution behavior of poorly soluble drugs (BCS class II and IV drugs), biorelevant dissolution media better simulating the actual conditions in the GIT have been recommended to accurately predict in vivo behavior (Klein, 2010; Fagerberg et al., 2010). It is well known that pH, surfactants, ionic strength, and buffer capacity influence the dissolution rate of a drug in the GIT (Jinno et al., 2000; Dressman et al., 1998; de Smidt et al., 1987; de Smidt et al., 1994; Gibaldi and Feldman, 1970; Gibaldi et al., 1968; Higuchi, 1964). Additionally, hydrodynamics, flow rates and liquid volumes in the GIT have also been shown to influence the drug dissolution rate (Dressman and Reppas, 2000). Dissolution of a solid substance in its own solution may be described by the Noyes-Whitney equation (Eq. (1)) (Noyes and Whitney, 1897):
dM/dt = (D · A / h) · (C_S − C) (1)
where dM/dt is the rate of dissolution, D is the diffusion coefficient, A is the surface area, C_S is the saturation solubility, C is the apparent concentration of a drug in the solution at a certain time point, and h is the thickness of the boundary layer around the dissolving particle. The diffusion coefficient of a drug in solution is described by the Stokes-Einstein equation (Eq. (2)) (Miller, 1924):
D = K_B · T / (6 · π · η · r) (2)
where D is the diffusion coefficient of the drug in solution, K_B is the Boltzmann constant, T is the absolute temperature, η is the viscosity of the dissolution medium, and r is the radius of the diffusing molecule, assuming it is a sphere. Together, Eq. (1) and Eq. (2) predict that an increased viscosity of the dissolution medium will decrease the intrinsic dissolution rate (IDR) of a drug. Additionally, increased media viscosity has been shown to significantly delay tablet disintegration (Anwar et al., 2005; Parojcic et al., 2008; Radwan et al., 2012), which in turn will decrease the drug dissolution rate, as the surface area of a non-disintegrated tablet is much smaller compared to that of a disintegrated tablet. A previous study by Pedersen et al. showed that the rheological profile of the commonly used gastric dissolution media, i.e. 0.1 M HCl pH 1.2 and fasted state simulated gastric fluid (FaSSGF, pH 1.6), differs from that of human gastric fluid (HGF) due to the presence of lipids, mucins and other macromolecules (Pedersen et al., 2013). Mucin molecules are glycoproteins of high molecular weight (2-14·10^6 g/mol), and have been shown to be largely responsible for the viscosity and shear thinning properties of mucus and HGF (Pedersen et al., 2013; Allen et al., 1984; Andrews et al., 2009). However, the importance of the content of mucins and HGF viscosity on drug dissolution has not yet been elucidated. Several simulated gastric media, e.g. FaSSGF and FaSSGF-v2, have been developed, but none of these reflect the viscosity of HGF (Erceg et al., 2012; Galia et al., 1998, 1999; Kostewicz et al., 2002; Nicolaides et al., 1999; Vertzoni et al., 2005). As the viscosity is known to affect the drug dissolution rate, it may be desirable to use a simulated HGF mimicking both the composition and the rheological properties of HGF, to accurately forecast the gastric dissolution behavior of orally administered drugs. With the aim of elucidating the impact of dissolution medium rheology, the IDR of cinnarizine was determined in aspirated and simulated HGF mimicking the composition and the rheological properties of HGF to different degrees. The physico-chemical properties of cinnarizine are shown in Table 1. Volunteers for the study Human gastric aspirates (aspHGF) were collected from volunteers during gastroscopic examinations at Herlev Hospital, Copenhagen. Volunteers (N = 23) with normal body weight, aged 20-65 years were included in the study. The volunteers did not eat or drink for six and two hours prior to the study, respectively. Only samples from volunteers that did not have any upper gastrointestinal diseases (known or discovered during the gastroscopic examination) were included in the present study. Smokers, pregnant or breastfeeding women, and volunteers that ingested any medication, food or water on the day of gastric fluid aspiration, were also excluded. The volunteers all gave their written informed consent to the experimental procedure. The study was approved by the Ethical Committee of Denmark and followed the conventions of the Declaration of Helsinki (H-2-2011-073).
Handling of aspiration samples The aspHGF was collected in volunteers immediately after introduction of a gastroscope.Aspiration was performed using a conventional gastroscope with a build-in suction mechanism.The gastroscopes used during the procedures had a diameter ranging between 9 and 11 mm and allowed visualization of the distribution of fluid in the stomach.The gastroscope was passed through the mouth and the esophagus into the stomach from where fluid was aspirated (aspHGF).No fluid was used to rinse the gastroscope prior the examination and great care was taken not to flush the endoscope with water before the fluid aspiration was performed.Immediately after collection, the aspHGF samples were stored at − 20 ○ C until use (and no longer than 3 months).Prior to characterization and dissolution studies, the pH of all aspHGF samples was measured, and samples with a pH above 5 were excluded from the study.A total of 17 samples were included in the study.These samples were filtrated to remove larger particles and the sampled volumes were pooled.As cinnarizine is a weak base (Table 1), its solubility and IDR in gastric fluid will depend on the pH of the dissolution medium.The pH of the pooled aspHGF was adjusted to 1.6 with HCl, to be able to study the effect of the viscosity difference between aspHGF and FaSSGF, independent of pH (the pH of aspHGF has previously been reported to be 2.5 ± 1.4) (Pedersen et al., 2013). Preparation of simulated human gastric fluids Six different simulated HGF were utilized in the present study.HCl (0.1 M) pH 1.2 and FaSSGF were included as these media represent the two most commonly used dissolution media to evaluate the gastric drug dissolution in vitro (Vertzoni et al., 2005).As it has been found that the osmolality of FaSSGF does not mimic that of aspHGF (Erceg et al., 2012;Pedersen et al., 2013;Vertzoni et al., 2007), FaSSGF* (resembling FaSSGF-v2) was prepared with a higher concentration of NaCl as compared to FaSSGF, to ensure the osmolality of the simulated gastric medium was comparable to aspHGF (Table 2).FaSSGF* was included in the study to test if a difference in the dissolution medium osmolality influences the IDR of cinnarizine.To study the impact of the rheological properties of aspHGF, on the IDR of cinnarizine in simulated gastric media, simulated human gastric fluid (simHGF) was utilized.simHGF was developed in a previous study by Pedersen et al., to achieve physicochemical and rheological properties similar to aspHGF (Pedersen et al., 2013).In that study, methylcellulose (MC) was found to be a suitable polymer to obtain a simHGF with similar rheological properties to aspHGF.The addition of MC 20.000 mPa⋅s at a concentration of 0.2% (w/v) to simHGF provided a simulated gastric medium with shear thinning properties similar to aspHGF.To study the effect of the viscosity difference between FaSSGF and simHGF, independent of pH, simHGF* was prepared with a pH of 1.6, similar to FaSSGF. To study the influence of sheer-thinning and Newtonian fluids on the IDR of cinnarizine, a Newtonian simHGF (N-simHGF) was prepared using MC with a lower viscosity grade (MC 15 mPa⋅s at a concentration of 0.5 %(w/v)) and thereby molecular weight.Table 2 lists the compositions of FaSSGF, FaSSGF*, simHGF, simHGF* and N-simHGF. FaSSGF and FaSSGF* were prepared by mixing the components listed in Table 2 with purified water.The solution was stirred overnight to ensure complete dissolution of all components, and the pH was adjusted to 1.6 with HCl. 
As MC quickly gels at room temperature, simHGF was prepared in three steps; (i) the polymer, MC 20.000 mPa⋅s, was added slowly to ice cold purified water under vigorous stirring to form a 0.25 % (w/v) polymer dispersion, (ii) a two-times concentrated simHGF medium (without MC) was prepared by dissolving double the concentration of the components in Table 2 in purified water, and (iii) the ice cold polymer dispersion was mixed with the concentrated simHGF medium.The final mixture was stirred overnight to achieve homogeneity, and the pH was adjusted to 2.5 (for simHGF) and 1.6 (for simHGF*) with HCl. N-simHGF was prepared similarly to simHGF, as described above, however, MC 15 mPa⋅s was used as the polymer in a final concentration of 0.5 %(w/v).The pH was adjusted to 2.5 with HCL. All the prepared simulated gastric fluids were stored at 5 • C for up to 48 h, until used. Characterization of aspirated and simulated gastric fluids The pooled aspHGF samples and the various prepared simulated gastric fluids were characterized in terms of pH, osmolality, surface tension, bile salt concentration, protein content, and rheological behavior.The characterization studies were performed to confirm that the simHGF, designed based on a previous batch of aspHGF (Pedersen et al., 2013), did in fact simulate the aspHGF, as well as to help interpret the IDR results. The pH values were measured by a pH electrode Metrohm (Switzerland) connected to a PHM 220 pH-meter (Radiometer, Brønshøj, Denmark). Surface tension was determined by the pendant drop method on a KRÜSS DSA100 Drop Shape Analysis System (KRÜSS GmbH, Hamburg, Germany) connected to a Julabo ED-5 Open Bath Circulator (Julabo Labortechnik, Seelbach, Germany).The temperature was kept at 37 • C. A needle with a diameter of 1.825 mm was used and a drop of 10 ± 0.5 μL was analyzed.A measurement was conducted every 10 s for a maximum of 30 min or until the measurements indicated a stabile surface tension. The total bile salts concentration was quantified using a Total Bile Acids Assay Kit (Diazyme Laboratories Inc., Poway, CA, USA), and following the instructions of the manufacturer. The total protein content was quantified using a Bicinchoninic Acid protein Kit (Sigma Aldrich, Spruce, St. Louis, USA) with bovine serum albumin as a standard.The assay was carried out following the instructions of the manufacturer. 2.2.4.1.Rheology.Rheological characterization of the pooled aspHGF samples and the different simulated gastric media was conducted on an AR-G2 rheometer, (TA Instruments, Waters Corporation, New Castle, DE, USA) using a cone and plate geometry.All measurements were performed at 37 • C with a 40 mm aluminum steel cone (TA Instruments, Waters Corporation, (New Castle, USA)) at a gap of 33 μm.To limit evaporation, a protective casing, custom fabricated at the Department of Pharmacy, Faculty of Health and Medical Science (Copenhagen, Denmark), was attached onto the fixed plate and the edge of the liquid sample was covered with 5 mL of low-viscous poly(dimethylsilcoxane) oil after lowering of the cone to the measurement gap. The sample (350 µL) was allowed to equilibrate for 10 min at 37 • C before measurements were conducted.Three consecutive measurements were conducted to determine the rheological properties of the pooled aspHGF and the different simulated gastric media.Inertia dominated measurements were excluded in the data evaluation. 
The apparent viscosity of the samples was measured as a function of the shear rate from 0.001 to 1000 s − 1 .The tolerance was set to 5 %.The maximum measuring time for each shear rate was set to 3 min.Measurements not reaching equilibrium within the 3 min were not taken into account. Dissolution studies The IDR of cinnarizine in the aspHGF and the different simulated gastric fluids were measured using the μDISS Profiler™ from pION Inc. (Woburn, MA, USA), containing six dip probes connected to a UV-Vis spectrophotometer.Absorbance of cinnarizine was measured in the wavelength range of 240 -265 nm.The vial holder was preheated to 37 ○ C with a Julabo Open Bath Circulator (Julabo Labortechnik GmbH, Seelbach, Germany).In each experiment (n ˃ 3), IDR was determined in 6 mL of aspHGF or 10 mL of simulated gastric media.The probes were centre-positioned in the vials.A 5 mm path length of the in situ UV probe was used.Each channel was calibrated with its own standard curve prior to the experiment.To calibrate, a stock solution of cinnarizine in ethanol was prepared, and increasing amounts of stock solution were added to the dissolution medium and the absorbance was measured.A Mini-IDR™ compression system (Heath scientific, Buckinghamshire, United Kingdom) was used to prepare miniaturized compacts with a constant surface area.Stainless steel dies containing a cylindrical hole with an area of 0.071 cm 2 were filled with pure drug powder and compressed at a compression pressure of approximately 35 bar for 30 s.Each die was then inserted into a Teflon rotating disk carrier and placed on the bottom of the glass vials. The dissolution medium was transferred to each vial and the magnetic stirring system was turned on.The stirring rate was 50 rpm or 150 rpm.The absorbance was measured once every 10 s for 20 min.Parafilm was placed over the opening of the vials to avoid evaporation of media. 2.2.5.1.Fitting of curves using μDISS profiler software.To determine the IDR, all dissolution curves were analysed using a second derivative areaunder-curve method.A bi-exponential function (Eq.( 3)) was selected to describe the dissolved concentration as a function of time while accounting for loosely packed powder on the tablet surface. where C powder and C compact refer to the concentration of drug in the final saturated solution due to the contribution of the powder burst and compact dissolution.The surface areas associated with the loose powder and the compact are A powder and A compact , respectively.P ABL refers to the permeability across the aqueous boundary layer and t 0 is the lag time. To determine the IDR, the five parameters: C powder , C compact , A powder, A compact and t 0 , was determined.A compact was kept constant at 0.071 cm 2 and should in theory be constant throughout the measurement.The remaining parameters were determined by the software after curve fitting to obtain a low R 2 value, low residuals and a low standard deviation. where DR max is the maximum dissolution rate and A eff is the effective surface area of drug exposed to the dissolution medium calculated by the μDISS Profiler™ software. Data analysis All data are expressed as mean and standard deviation (SD), except for the rheological characterization data, which is expressed by a single determination.To determine if there was any statistical significant difference between the IDR of cinnarizine in the different simulated gastric media, the results were analyzed by a single sided analysis of variance followed by a Bonferroni posttest. 
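The curve-fitting step can be illustrated as follows. Eq. (3) itself is not reproduced in this text, so the bi-exponential form below is an assumed one that is merely consistent with the listed parameters, and scipy stands in for the µDISS Profiler™ software; the synthetic data, rate constants and starting values are arbitrary.

```python
# Hedged sketch of the dissolution curve fit. The functional form is an
# assumption consistent with the parameters named above (C_powder, C_compact,
# A_powder, A_compact fixed, P_ABL, t0); it is NOT the published Eq. (3).
import numpy as np
from scipy.optimize import curve_fit

V = 10.0           # dissolution volume in mL (simulated media experiments)
A_COMPACT = 0.071  # cm^2, fixed die area

def biexp(t, c_powder, c_compact, a_powder, p_abl, t0):
    """Dissolved concentration vs. time: a fast 'powder burst' term plus a
    slower compact-dissolution term (assumed form)."""
    tt = np.clip(t - t0, 0, None)
    burst = c_powder * (1 - np.exp(-p_abl * a_powder / V * tt))
    compact = c_compact * (1 - np.exp(-p_abl * A_COMPACT / V * tt))
    return burst + compact

t = np.arange(0, 20 * 60, 10.0)                    # one reading every 10 s for 20 min
c_obs = biexp(t, 2.0, 40.0, 0.5, 2.0, 5.0)         # synthetic "observed" curve
c_obs += np.random.default_rng(3).normal(0, 0.2, t.size)

popt, _ = curve_fit(biexp, t, c_obs, p0=[1, 30, 0.3, 1, 0])
c_powder, c_compact, a_powder, p_abl, t0 = popt
# Initial slope of the compact term divided by A_compact gives an IDR-like
# quantity (units follow those chosen for concentration and volume).
idr = p_abl * c_compact
print(popt, idr)
```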
Results and discussion The term biorelevant dissolution media has been used to describe the already existing simulated gastric and intestinal media developed based on in vivo chemical characterization results for both gastric and intestinal fluids (e.g.SGF, FaSSGF, and FaSSIF) (Dressman et al., 1998;Galia et al., 1998Galia et al., , 1999;;Vertzoni et al., 2005).However, as these media do not mimic the physical properties, such as the rheological behavior of the human fluids, they can only be regarded as "chemically relevant" dissolution media.Since both the chemical and physical characteristics of the in vivo GIT fluids may affect drug dissolution, the rheological characteristics of the human GIT fluids, and the simulated biorelevant dissolution media should be taken into account when studying drug dissolution.E.g., it is hypothesized that shear thinning behavior, can affect the IDR of a drug, as the viscosity of the dissolution media changes with different motility patterns or stirring rates used during measurement.In this study the IDR of cinnarizine was studied in aspHGF, as well as in various dissolution media simulating HGF to various degrees.Prior to the dissolution studies, the dissolution media were characterized with respect to pH, osmolality, surface tension, bile salt concentration, protein content, and rheological behavior. Characterization of the aspirated and simulated gastric fluids Table 3 lists the measured pH, osmolality, surface tension, bile salt concentration, and protein content of the pooled aspHGF, FaSSGF, FaSSGF*, simHGF, simHGF* and N-simHGF.As a reference, the values originally reported by Pedersen et al., and used to design the simHGF (Pedersen et al., 2013), are also reported in Table 3. When comparing the present aspHGF with the reference aspHGF (Pedersen et al., 2013), no significant differences are observed for the osmolality, surface tension, bile salt concentration and protein content (Table 3).The pH was different, however, this difference was induced in order to be able to compare the IDR in aspHGF to that in FaSSGF at the same pH. The osmolality, surface tension, and bile salt content of simHFG, N-simHGF, and simHGF* correspond to that of aspHGF (reference), and aspHGF (pooled), respectively (Table 3).The protein content of each of the simulated gastric media was kept constant at 0.1 g/L, which is equivalent to the protein content of FaSSGF and corresponding to the pepsin content (Table 2).The protein content determined by Pedersen et al. (2013) and in the pooled aspHGF in the present study was significantly higher (4.9 ± 1.0 g/L and 3.6 ± 0.7 g/L, respectively) compared to the protein content in the simulated gastric media (Table 3) (Pedersen et al., 2013).However, it was decided not to increase the protein content, as the observed small difference was not expected to influence the viscosity of the media, but could possibly change the surface tension in the media, which was unwanted. The osmolality of FaSSGF was significantly lower than the osmolality in aspHGF (reference and pooled), simHGF and simHGF* (Table 3).However, FaSSGF* had an osmolality which was identical to those of aspHGF and simHGF.The surface tension of FaSSGF and FaSSGF* was higher than that of aspHGF and simHGF due to the relatively low amount of bile salts added to this medium. Rheological characterization of the aspirated and simulated gastric fluids Fig. 
1A shows that the shear viscosity profile of the pooled aspHGF and the simHGF was similar and both within the shear viscosity range previously measured in aspHGF by Pedersen et al. (Pedersen et al., 2013).Pooled aspHGF and simHGF showed shear thinning behavior from 0.01 s − 1 to 177 s − 1 and 0.003 s − 1 to 100 s − 1 , respectively, followed by a plateau at higher shear rates.At shear rates above 177 s − 1 the shear viscosity of aspHGF was lower than that of simHGF (Fig. 1B).In a study by Bennett et al. the shear rates corresponding to the mixing pattern of the antrum were determined using magnetic resonance imaging and locust bean gum.It was found that shear rates in the range of 30-60 s − 1 corresponded the forces applied to drug delivery systems (locust bean gum) in the antrum of the fasted stomach (indicated in Fig. 1B) (Bennett et al., 2009).The shear viscosity of simHGF was measured to be 2.7 ± 0.2 mPa⋅s at a shear rate of 56.2 s − 1 .At this shear rate, the shear viscosity of pooled aspHGF was lower than the simHGF, i.e. 2.0 ± 0.1 mPa⋅s.However, both simHGF and pooled aspHGF had a higher shear viscosity than FaSSGF (1.1 ± 0.0 mPa⋅s) and 0.1 M HCl (1.1 ± 0.3 mPa⋅s).The rheological profile of simHGF* was identical to that of simHGF (and was therefore not shown in Fig. 1).N-simHGF containing 0.5 %(w/v) MC 15 mPa⋅s showed Newtonian behavior at shear rates above 5.6 s − 1 , where its viscosity was independent of the shear rate.The viscosity of N-simHGF at a shear rate of 56.2 s − 1 was 2.0 ± 0.0 mPa⋅s. The osmolality of dissolution media has previously been shown to impact on drug dissolution (Jantratid et al., 2008;Streng et al., 1984).Ionic strength can have an impact on the interaction of the protonated compounds and taurocholate by facilitating the formation of insoluble salts and this interaction might have a significant impact on the solubility and thereby the dissolution profile (Erceg et al., 2012;Reppas and Vertzoni, 2012;Streng et al., 1984;Vertzoni et al., 2005).FaSSGF* was developed with an osmolality identical to aspHGF, in order to evaluate whether an increased osmolality would affect the IDR of cinnarizine.The IDR of cinnarizine measured in FaSSGF* (426 ± 7 µg/(cm 2 ⋅min)) and FaSSGF (444 ± 7 µg/(cm 2 ⋅min)) were similar, and thus no effect of osmolality was observed (Fig. 2). No significant differences were observed between the IDR of cinnarizine in simHGF* and aspHGF due to similar contents, characteristics and rheological behavior of these two media (Figs. 1, 2, and Table 3).However, the IDR of cinnarizine in aspHGF and simHGF* was significantly lower (approximately 50 %) than in FaSSGF and FaSSGF*, presumably due to the differences in the viscosity (Figs. 1 and 2).This is in accordance with the Noyes Whitney and Stoke Einstein equation (Eq.( 1) and ( 2)) as the viscosity of simHGF* and pooled aspHGF was measured to be approximately twice the viscosity of FaSSGF and FaSSGF*. Influence of Shear-thinning and Newtonian fluids on the IDR of cinnarizine In order to evaluate the effect of the shear thinning properties of aspHGF and simHGF on the IDR of cinnarizine, the IDR was determined in the shear thinning media simHGF, as well as in the Newtonian fluids N-simHGF and FaSSGF.To evaluate the shear thinning behavior, the IDR experiments were performed at two different shear stresses, using stirring rates of 50 rpm and 150 rpm.Fig. 3 shows the measured IDRs of cinnarizine in simHGF, N-simHGF and FaSSGF, with stirring rates of 50 and 150 rpm. As shown in Fig. 
3, the IDRs of cinnarizine measured in simHGF at stirring rates of 50 rpm (49 ± 5 µg/(cm²·min)) and 150 rpm (67 ± 2 µg/(cm²·min)) were significantly different (p < 0.001), due to the shear thinning properties of simHGF (Fig. 1). The corresponding viscosities of simHGF at the two stirring rates were measured to be approximately 2.7 mPa·s and 2.2 mPa·s at 50 rpm and 150 rpm, respectively (Fig. 1B). There was no significant difference in the IDR of cinnarizine measured in N-simHGF at stirring rates of 50 rpm (64 ± 9 µg/(cm²·min)) and 150 rpm (69 ± 1 µg/(cm²·min)). This is in agreement with theory, as the viscosity is independent of the shear rate in a Newtonian medium and thereby the diffusion coefficient is constant, resulting in an unchanged IDR. A similar trend was observed when measuring the IDR of cinnarizine in the Newtonian FaSSGF, where the IDR was determined to be 444 ± 7 µg/(cm²·min) and 429 ± 6 µg/(cm²·min) at stirring rates of 50 rpm and 150 rpm, respectively. The motility pattern varies with location in the human stomach even in the fasted state (Bennett et al., 2009; Hasler, 2009). In the proximal part of the stomach (fundus) slow and sustained contractions are present (0.5 cm/s proximal body), while in the antrum distally of the stomach contractions occur more often (4 cm/s) (Hasler and Yamada, 2009). It has been found that the fundus motility corresponds to no or very slow shear rates of approximately 0.1 s−1, whereas the antrum motility corresponds to shear rates of 30-60 s−1 (Bennett et al., 2009). The motility differences in the stomach have been shown to affect the viscosity of aspHGF due to its shear thinning properties (Allen et al., 1984; Pedersen et al., 2013). Thus, different locations of a dosage form in the stomach after administration might have a significant impact on the dissolution rate of the drug and thereby on the absorption, especially for poorly soluble drugs. Large differences in dissolution behavior are expected to occur depending on whether the drug dissolution takes place in the fundus or the antrum of the stomach, due to differences in motility pattern and thus large viscosity differences. Using FaSSGF to evaluate fundic drug release might overestimate the IDR by a factor of 100-10,000, due to viscosity differences between aspHGF and FaSSGF in the range of 100-10,000 mPa·s under conditions as in the fundus. Conclusions When evaluating the dissolution rate of weakly basic drugs in the fasted stomach, parameters such as pH, viscosity and motility pattern should be considered. Even small pH differences in simulated human gastric fluids lead to significant differences in the IDR of the weakly basic model drug, cinnarizine. The different motility patterns present in the human stomach might induce significant differences in dissolution of weakly basic drugs due to the shear thinning properties of aspHGF. Thus, the use of Newtonian FaSSGF, with viscosity similar to water, to evaluate drug dissolution might overestimate the fundic drug IDR by a factor of 100-10,000, compared to the non-Newtonian, more viscous, fluid in the human stomach. Therefore, the application of media with a viscosity similar to that of gastric fluids is recommended to evaluate gastric drug dissolution in vitro. Fig. 3.
Intrinsic dissolution rates obtained for cinnarizine in simHGF, N-simHGF and FaSSGF, with stirring rates of 50 and 150 rpm. Please note that simHGF 2.5 and the FaSSGF 1.6 at 50 rpm are also included in Fig. 2, and are added to Fig. 3 to facilitate comparison.
Table 1 Physico-chemical characteristics of the model compound cinnarizine.
Table 2 Composition of the utilized simulated human gastric media. ǂ Due to batch variations, the concentration of the MC 20.000 mPa·s was changed from 0.2 %(w/v) to 0.125 %(w/v) to obtain a rheological profile of simHGF comparable to aspHGF in the present study, see Fig. 1.
Table 3 Composition and characteristics of the aspirated and the simulated gastric fluids. a Set values.
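As a rough numerical illustration of the viscosity argument made in the discussion above, the sketch below fits a power-law (Ostwald-de Waele) model to example shear-viscosity points and propagates the resulting viscosities to an expected IDR ratio via the Stokes-Einstein scaling D ∝ 1/η. The power-law model and the specific data points are illustrative assumptions, not the analysis performed in the study.

```python
# Back-of-envelope sketch: power-law fit of shear-thinning viscosity data and
# the implied IDR ratio versus FaSSGF, using D ~ 1/eta. Data points and the
# Ostwald-de Waele model are illustrative assumptions only.
import numpy as np

# hypothetical shear-thinning data (shear rate in 1/s, apparent viscosity in mPa*s)
shear_rate = np.array([0.1, 1.0, 10.0, 56.2, 100.0])
viscosity = np.array([400.0, 60.0, 8.0, 2.7, 2.2])

# Ostwald-de Waele: eta = K * gamma_dot**(n - 1); fitted on log-log axes
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n, K = slope + 1.0, np.exp(intercept)

eta_fundus = K * 0.1 ** (n - 1)     # ~0.1 1/s shear rate, fundus-like conditions
eta_antrum = K * 56.2 ** (n - 1)    # antrum-like shear rate
eta_fassgf = 1.1                    # FaSSGF, close to water (mPa*s)

# If the IDR scales with D ~ 1/eta (other factors equal), these ratios indicate
# by roughly what factor FaSSGF could overestimate the IDR in each region.
print(eta_fundus / eta_fassgf)
print(eta_antrum / eta_fassgf)
```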
6,388.2
2022-05-01T00:00:00.000
[ "Medicine", "Biology" ]
Blind Estimation of the PN Sequence of A DSSS Signal Using A Modified Online Unsupervised Learning Machine Direct sequence spread spectrum (DSSS) signals are now widely used in air and underwater acoustic communications. A receiver which does not know the pseudo-random (PN) sequence cannot demodulate the DSSS signal. In this paper, firstly, the principle of principal component analysis (PCA) for PN sequence estimation of the DSSS signal is analyzed, then a modified online unsupervised learning machine (LEAP) is introduced for PCA. Compared with the original LEAP, the modified LEAP has the following improvements: (1) By normalizing the system state transition matrices, the modified LEAP can obtain better robustness when the training errors occur; (2) with using variable learning steps instead of a fixed one, the modified LEAP not only converges faster but also has excellent estimation performance. When the modified LEAP is converging, we can utilize the network connection weights which are the eigenvectors of the autocorrelation matrix of the DSSS signal to estimate the PN sequence. Due to the phase ambiguity of the eigenvectors, a novel approach which is based on the properties of the PN sequence is proposed here to exclude the wrong estimated PN sequences. Simulation results showed that the methods mentioned above can estimate the PN sequence rapidly and robustly, even when the DSSS signal is far below the noise level. In Reference [8], Warner et al. firstly used triple correlation function (TCF) to estimate the spreading code; since then, many algorithms based on it have been proposed for PN sequence estimation [9][10][11][12][13]. These TCF-based methods can be used not only for PN sequence estimation, but also for detecting DSSS signals. They can work well in good conditions, but as the environment gets worse, the performance of these algorithms deteriorates sharply. In Reference [14], Burel et al. introduced an algorithm to PN sequence estimation of the DSSS signal based on eigenvalue decomposition (EVD). In this method, the received DSSS signal is sampled and divided into temporal windows, the size of which is the PN sequence period. Each window provides a vector for the eigenanalysis, then the PN sequence can be estimated by the eigenvectors. 2 of 12 In Reference [15], the sampled DSSS signal was divided into continuous non-overlapping temporal vectors with a width of two periods of PN sequence; then, the PN sequence could be estimated by employing the EVD method after the average correlation matrix was calculated. In Reference [16], Qui et al. proposed a PN sequence estimation method. In this method, the signal was divided into a series of overlapping windows with the width much shorter than the information symbol width, and then the segments of the spreading code were estimated using the EVD method and a complete spreading code was obtained from the estimated segments. Although these EVD based methods [14][15][16][17][18] show excellent performance in low signal to noise ratio (SNR) conditions, they become expensive in terms of computing when the length of the PN sequence is long. In order to solve this problem, many PN sequence estimation methods based on neural network have been proposed. For example, in Reference [19], Dominique et al. introduced a subspace-based PN sequence estimation algorithm for DSSS signals using a simplified Hebb rule, which can reduce the number of computations required compared to the existing Hebb-based sequence estimator. 
The main advantage of these Hebb estimators is that the estimator architecture is very simple and can be implemented easily, but the constant small learning steps severely limit their performance. In Reference [20], Chen et al. proposed an online unsupervised learning machine (LEAP) to extract multiple principal components; in other words, the LEAP can be used for principal component analysis (PCA). This algorithm is adaptive to nonstationary input and requires no knowledge of whether, or when, the input statistics change. Since it requires little memory or data storage, the LEAP is very suitable for use in engineering. However, the constant small learning step also severely limits its performance. Although the convergence speed of the LEAP can be accelerated by using a large learning step, the LEAP then often fails to converge to the global optimum point, and the large learning step may damage the stability of the system.

Based on the above, in this paper we propose a modified LEAP algorithm and apply it to PN sequence estimation of the DSSS signal. Compared to the original LEAP, the modified LEAP uses variable learning steps instead of a fixed one, which can greatly improve the convergence performance of the network. Namely, the modified LEAP first drives the network close to the optimum convergence point with large learning steps when training starts; then, as the network approaches the optimum convergence point, small learning steps are used, ensuring that the network converges to the best point. Meanwhile, in order to maintain the stability of the network when variable learning steps are used, the state transition matrices of the system are normalized. In summary, our main contributions are as follows: (a) the LEAP algorithm is applied to the field of PN sequence estimation of DSSS signals, and a modified LEAP algorithm is proposed; compared to the original LEAP algorithm, the modified LEAP algorithm has better convergence performance due to its use of variable learning steps rather than a fixed one; (b) since the phase of an eigenvector can be inverted, an incorrect estimate of the PN sequence of the DSSS signal may be obtained; a novel approach which makes full use of the correlation characteristics of the PN sequence is therefore proposed here to solve this problem.

This paper is organized as follows. In Section 2, the mathematical model of DSSS signals and the principle of PCA for PN sequence estimation are given first, and then a modified LEAP is introduced. The method for PN sequence estimation and the elimination of phase ambiguity are described in Section 3. The main steps for PN sequence estimation of DSSS signals are described in Section 4. Simulation results are presented in Section 5. Finally, a conclusion is drawn in Section 6.

DSSS Signal Model

In a DSSS transmission, the symbols are multiplied by a PN sequence, which spreads the bandwidth. In this paper, we use the notation below [14]:
- The convolution of the transmission filter, the channel filter (which represents the channel echoes) and the receiver filter.
- {c_m, m = 0, 1, 2, · · · , N − 1}: The PN sequence.
- N: The length of the PN sequence.
- T_p: The symbol period.
- T_c: The chip period (T_c = T_p/N).
- h(t): The convolution of the PN sequence with all the filters of the transmission chain (transmitter filter, channel echoes, and receiver filter).
- h: The vector containing the samples of h(t).
- s(t): The DSSS baseband signal at the output of the receiver filter.
- a_l: The message symbols.
- v(t): The noise at the output of the receiver filter, which is uncorrelated with the signal.

Then, the baseband signal at the output of the receiver filter can be written as x(t) = s(t) + v(t), with s(t) = Σ_l a_l h(t − l T_p).

The Principle of PCA for PN Sequence Estimation

Since many algorithms can estimate T_p and T_c, they are assumed to be obtained in advance in this paper [15]. After being sampled (the sampling rate is 1/T_c) and passed through an observation window with duration T_p, the received DSSS signal x(t) produces a series of observed sample vectors, one after each interval T_p. Let us denote by x(k) the content of a window; then x(k) can be modeled as x(k) = s(k) + v(k), where the dimension of x(k) is N = T_p/T_c and k represents the discrete time. Usually, the observation window has a random time delay T_x, which is the desynchronization between windows and symbols (0 ≤ T_x < T_p). Therefore, s(k) generally contains two consecutive message symbol bits, and s(k) can be written as s(k) = a_k h_1 + a_{k+1} h_2, where a_k, a_{k+1} are the two consecutive message symbols, h_1 is a vector containing the end (duration T_p − T_x) of the spreading waveform h(t) followed by zeros (duration T_x), and h_2 is a vector containing zeros (duration T_p − T_x) followed by the beginning (duration T_x) of the spreading waveform h(t). Let e_i = h_i/‖h_i‖, i = 1, 2; then e_i^T e_j = δ(i − j), where e_i, i = 1, 2 are orthonormalized vectors and δ(·) is the Dirac function. Then, x(k) can be expressed as follows: x(k) = a_k ‖h_1‖ e_1 + a_{k+1} ‖h_2‖ e_2 + v(k). Using the equations above, the autocorrelation matrix R_x = E{x(k) x(k)^T} of the DSSS signal can be obtained (Equation (8)), where E{·} denotes expectation; in Equation (8), σ_n^2 is the variance of the noise, η = σ_s^2/σ_n^2 with σ_s^2 the variance of s(k), and I is an identity matrix of dimension N × N. From Equation (8), it is clear that two eigenvalues will be larger than the others when T_x > 0, and according to their corresponding eigenvectors, which can be obtained by PCA, the PN sequence can be estimated.
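Before turning to the LEAP recursion in the next subsection, the following short sketch illustrates the PCA principle just described. It is our own illustration rather than code from the paper: the function name dominant_eigvecs and its variable names are assumptions, and it uses a plain eigendecomposition (the EVD baseline that the LEAP network later replaces) on a sample estimate of R_x.

```python
# Minimal sketch (assumed names, not the paper's code): split the chip-rate
# samples into windows of one symbol period, estimate the autocorrelation
# matrix R_x, and keep the eigenvectors of its two largest eigenvalues,
# which span the subspace of the spreading waveform.
import numpy as np

def dominant_eigvecs(rx_samples: np.ndarray, chips_per_symbol: int):
    n_windows = len(rx_samples) // chips_per_symbol
    X = rx_samples[: n_windows * chips_per_symbol].reshape(n_windows, chips_per_symbol)
    R_x = (X.T @ X) / n_windows              # sample estimate of E{x(k) x(k)^T}
    eigvals, eigvecs = np.linalg.eigh(R_x)   # eigenvalues in ascending order
    return eigvecs[:, -1], eigvecs[:, -2]    # e_1, e_2: the two dominant eigenvectors
```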
Mathematical Model of The Modified LEAP

The LEAP is implemented on a neural network with linear units, as shown in Figure 1. Specifically, in many practical applications, M ≪ N. Let x(k) denote the input vector process; then the network's input-output relation can be written as y_i(k) = Σ_{j=1}^{N} w_ij(k) x_j(k), i = 1, 2, · · · , M [20], where w_ij(k) is the connection weight from the jth input to the ith output neuron, all at discrete time k. Suppose that R_x has eigenvalues λ_1 > λ_2 > · · · > λ_N > 0 with corresponding normalized eigenvectors e_i, i = 1, 2, · · · , N. The LEAP for connection weight updating is the following nonlinear non-autonomous dynamical vector difference equation (Equation (10)), for i = 1, 2, · · · , M, where β is the constant learning step. In Equation (10), A_i and B_i can be seen as the state transition matrices of the system, which are important "de-correlation" terms performing Gram-Schmidt orthogonalization among all connection weights at each iteration. One can think of the term y_i x as the so-called Hebbian learning, for which the strengthening of the connection weights is proportional to the input-output correlation. According to the theory in Reference [20], in the original LEAP the learning step β should be small enough; otherwise the system performance can be severely degraded and training errors may occur, making the system unstable. Namely, there is an equilibrium point, and if the learning step β exceeds this point, the system becomes unusable. Therefore, in practice, the learning step is set as small as possible in the original LEAP, but this greatly increases the time required for network convergence.

For the reasons above, a modified LEAP is proposed. First, the learning step β is modified to a variable step β_i(k), for i = 1, 2, · · · , M, with 0 < α < 1 and γ > 0, where |·| denotes the absolute value and max{·} denotes the maximum value. Here, α and γ are the weight coefficients of the variable learning steps; they are similar to the weight coefficients in the variable-step-size least mean square (LMS) algorithm. Compared to those of the original LEAP algorithm, the learning steps of the modified LEAP algorithm adapt to the output of the network. Namely, when the network starts training, the difference between λ_i(k) and λ_i(k − 1) is large and the network uses large learning steps; when the network is about to converge, λ_i(k) and λ_i(k − 1) are approximately equal and the network uses small learning steps. In this way, the modified LEAP not only has a fast convergence speed but also achieves good steady-state performance. In Equation (13), the max{·} term keeps |λ_i(k) − λ_i(k − 1)| from becoming too large (which may be caused by training errors, etc.) and thereby making the learning steps too large and seriously affecting the convergence speed of the network. However, when the variable learning steps are used, the step size of the modified LEAP is relatively large at the beginning of training and may still exceed the equilibrium point mentioned in Reference [20]. In that case, the uncorrelation between the connection weights of the network may be destroyed, which results in instability of the system. Therefore, the state transition matrices of the system are normalized here using the Frobenius norm ‖·‖_F. It is obvious that ‖A_i(k)‖_F is a compatible matrix norm, so 0 < λ(A_i(k)) ≤ 1 and 0 < λ(B_i(k)) ≤ 1, and the normalization does not affect the de-correlation function of the matrices A_i, B_i. Therefore, according to the stability criterion of Liapunov [21,22], we can maintain system stability even when training errors damage the uncorrelation between the connection weights w_i.
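Equations (10)-(15) are not reproduced in this extract, so the following sketch only illustrates, in schematic form, the two ideas described above: a variable per-output learning step driven by the change in the eigenvalue estimates, and a norm-based stabilization that plays the role of the Frobenius-norm normalization of the state transition matrices. It grafts these ideas onto a generic Sanger-type Hebbian PCA update; the update rule, the step-adaptation formula and all names are our own assumptions, not the authors' equations.

```python
# Schematic sketch only (assumed formulas and names, not Equations (10)-(15)):
# a Sanger-type Hebbian PCA step with a variable per-output learning step and
# a norm cap standing in for the Frobenius-norm normalization described above.
import numpy as np

def modified_pca_step(W, x, lam_prev, beta_prev, alpha=0.9, gamma=2.0, eps=1e-12):
    """One update of the (M, N) weight matrix W from one input window x of length N."""
    y = W @ x                                    # outputs of the M linear units
    lam = y ** 2                                 # crude per-output eigenvalue estimates
    # Variable learning step: large while the eigenvalue estimates still change,
    # small once lam(k) and lam(k-1) are approximately equal.
    beta = alpha * beta_prev + gamma * np.abs(lam - lam_prev) / (np.max(np.abs(lam)) + eps)
    # Hebbian term y x^T plus a Gram-Schmidt-style de-correlation term.
    dW = np.outer(y, x) - np.tril(np.outer(y, y)) @ W
    dW /= max(1.0, np.linalg.norm(dW))           # keep each update bounded for stability
    return W + beta[:, None] * dW, lam, beta
```

With W initialized to small random values and beta_prev to a small constant vector, iterating this step over the windowed vectors x(k) drives the rows of W towards the dominant eigenvectors, in the same spirit as the LEAP network, though with our assumed step rule rather than Equation (13).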
Asymptotic Stability Analysis of The Modified LEAP

Since β_i(k) is small when k → ∞ and ‖A_i(k)‖_F ≥ 1, we can obtain an approximation for i = 2, 3, · · · , M. It can easily be shown [20] that the point given by Equation (17) is an equilibrium point of Equation (10). Let g_i(k) = w_i(k) − e_i for i = 1, 2, · · · , M; we can then obtain the following approximations, for i = 1 and for i = 2, 3, · · · , M. Here, ς_i = µβ_i(k), where µ is a positive integer. Then, Equations (18) and (19) can also be written in a form involving matrices D_i, and D_1 can be written as in Equation (23). According to Equation (23), it is obvious that the eigenvectors of D_1 are {e_1, e_2, · · · , e_N}, with corresponding eigenvalues given in Equation (24). Similarly, all D_i, i = 2, 3, · · · , M have the same eigenvectors {e_1, e_2, · · · , e_N}, and their corresponding eigenvalues are given in Equation (25). On the basis of Equations (24) and (25), it is clear that all eigenvalues of the D_i are negative. The magnitudes of all eigenvalues of I + ς_i D_i will be less than 1 if ς_i is small enough, for i = 1, 2, · · · , M. Therefore, the equilibrium point given by Equation (17) is asymptotically stable in the sense of Liapunov [21,22]; i.e., there exists a neighborhood of the point such that any solution initially in this neighborhood will converge to the equilibrium point as k → ∞.

PN Sequence Estimation and The Elimination of Phase Ambiguity

When the modified LEAP is converging, which is judged by a threshold ε, where 0 < ε ≪ 1 and i = 1, 2, · · · , M, we concatenate e_1(k) and e_2(k). Because the phase of the eigenvectors can be inverted, four estimated sequences b_i, i = 1, 2, · · · , 4 can be obtained. Then, the estimated PN sequences of the DSSS signal can be calculated (Equation (27)), where each P̂N_i is a vector of length N. In order to select the true estimated PN sequence from P̂N_i, i = 1, 2, · · · , 4, the correlation characteristics of the PN sequence of the DSSS signal can be used; namely, the PN sequence has good autocorrelation and bad cross-correlation. Hence, let us define a correlation factor ψ, where τ = 0, 1, 2, · · · , N − 1 denotes the discrete time delay, i = 1, 2, · · · , 4, and * denotes the conjugate operation. Obviously, the smaller ψ is, the worse the cross-correlation of the PN sequence is. Then, the index of the true estimated PN sequence among P̂N_i, i = 1, 2, · · · , 4 can be obtained by taking the minimum, where min{·} denotes taking the minimum value. Then, according to Equation (30), the true estimated PN sequence is P̂N_id. In addition, in the absence of other prior information, the overall phase ambiguity of P̂N_id cannot be eliminated, which means the true PN sequence could be either P̂N_id or −P̂N_id. For example, if we know that the PN sequence is an m-sequence, then we can eliminate the overall phase ambiguity of the estimated PN sequence according to the balance property of m-sequences; namely, in an m-sequence the number of 1s is one more than the number of −1s.

The Main Steps for PN Sequence Estimation

To be more specific, the main steps involved in this paper for PN sequence estimation of the DSSS signal are summarized as follows:
Step 1. Sample the received DSSS signal to obtain x(k), k = 1, 2, 3, · · · . Meanwhile, in order to improve the system robustness, the neural network input x(k) should be normalized.
Step 2. Set the initial values of w_i, i = 1, 2, · · · , M, which are often random numbers between −1 and 1, and then normalize them.
Step 3.
According to Equations (10)-(15), update the weight vectors w_i, i = 1, 2, · · · , M.
Step 4. When the neural network has converged, extract the eigenvectors corresponding to the largest and second largest eigenvalues. Subsequently, concatenate the two eigenvectors; then, according to Equations (26)-(30), the PN sequence of the DSSS signal can be estimated.
This completes the derivation of the blind PN sequence estimation method for the DSSS signal proposed in this paper.

Simulations and Analysis

To verify the capability of the proposed method, simulation results are presented in this section. Here, the DSSS signal is generated using a random sequence of length 31 (it is one of the m-sequences); then, for completeness, we shall set N = M = 31. The symbols belong to a BPSK (binary phase shift keying) constellation. The noise is additive white Gaussian noise, which is uncorrelated with the DSSS signal, and T_x/T_c = 10. Figure 2 shows the true PN sequence of the DSSS signal. The estimated eigenvalues of the autocorrelation matrix of the DSSS signal are shown in Figure 3, and the normalized eigenvectors e_1 and e_2 corresponding to the largest and second largest eigenvalues are shown in Figure 4a,b. Both are estimated by the modified LEAP (α = 0.9, γ = 2) at an SNR of −5 dB. Then, the two eigenvectors shown in Figure 4 are concatenated. Because the phase of the eigenvectors can be reversed, we can get two different estimated sequences b_1 and b_2, shown in Figure 5 (here, we regard b and −b as the same). Then, according to Equation (27), the estimated PN sequences P̂N_1 and P̂N_2 are obtained, as shown in Figure 6. On the basis of Equation (29), in this simulation, ψ_1 = 102 and ψ_2 = 94. Since ψ_1 > ψ_2, the true estimated PN sequence is P̂N_2, which is the same as the true PN sequence of the DSSS signal shown in Figure 2. The above simulation results thus show the validity of the proposed method, even at very low SNR.
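As a concrete illustration of the candidate-selection step just demonstrated (ψ_1 = 102 versus ψ_2 = 94), here is a small sketch that computes a correlation factor for each candidate sequence and keeps the candidate with the smallest value. The function names are our own, and since Equations (28)-(30) are not reproduced in this extract, the factor used below (the summed magnitude of the off-peak circular autocorrelation after a hard ±1 decision) is only one plausible reading of the described ψ.

```python
# Sketch of the candidate-selection rule (assumed reading of the correlation
# factor, not the paper's Equation (28)): the true PN estimate is taken to be
# the candidate with the weakest correlation at non-zero lags.
import numpy as np

def correlation_factor(candidate: np.ndarray) -> float:
    """Sum of off-peak circular autocorrelation magnitudes; smaller is better."""
    c = np.sign(candidate)                       # hard decision to +/-1 chips
    return float(sum(abs(np.dot(c, np.roll(c, tau))) for tau in range(1, len(c))))

def pick_pn_estimate(candidates):
    """Return the candidate whose correlation factor is smallest, plus all factors."""
    psi = [correlation_factor(c) for c in candidates]
    return candidates[int(np.argmin(psi))], psi
```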
Figure 7 shows the relationship between the number of iterative steps required by the modified LEAP and the original LEAP to make the network converge at different learning steps as the SNR changes. Figure 8 shows the relationship between the correct estimation probability of the PN sequence and SNRs, when using the modified LEAP and the original LEAP at different learning steps as well as the TCF- and EVD-based methods. Both tests use 1000 Monte Carlo simulations.
It can be seen from Figures 7 and 8 that when the learning step is set to β = 0.01 or β = 0.05 in the original LEAP, the network converges stably to the optimum point owing to the small step size, but the number of iterations required for convergence is relatively large, similar to the modified LEAP with α = 0.5, γ = 1. When the learning step is set to β = 0.5 in the original LEAP, the large learning step increases the convergence speed of the network, but the correct estimation probability of the PN sequence is seriously reduced because the network cannot converge to the optimum point. When the learning step of the original LEAP is set to β = 0.1, the original LEAP not only converges rapidly but also achieves good estimation performance, similar to the modified LEAP with α = 0.9, γ = 1 or α = 0.9, γ = 2. However, in practical applications it is difficult to choose a suitable learning step for the original LEAP, whereas in the modified LEAP one only needs to set the weight coefficients α and γ slightly larger; the network then first approaches the optimum point with a large step size and reaches it with a small step size, which not only makes the network converge rapidly but also yields a high correct estimation probability of the PN sequence of the DSSS signal.

Moreover, according to Figure 8, it is obvious that when the LEAP-based methods converge correctly, their performance is comparable to that of the EVD-based method and superior to that of the TCF-based method. The reason is twofold. First, since the LEAP neural network is actually a principal component analysis network, the principle of the LEAP-based methods and the EVD-based method is the same, which means that their performance is comparable. Second, when using the TCF-based method for PN sequence estimation, the peaks of the TCF need to be accurately located [8], which is difficult to achieve in a low-SNR environment. Therefore, the TCF-based method performs poorly when the SNR is low.

Conclusions

A blind PN sequence estimation method for the DSSS signal using a modified LEAP is proposed in this paper. Compared to the original LEAP, the modified LEAP makes it easier to set a suitable learning step size and obtain good convergence performance. When the modified LEAP has converged, the PN sequence of the DSSS signal can be estimated from the connection weights of the network; these weights are the eigenvectors of the autocorrelation matrix of the DSSS signal. Because of the phase ambiguity of the eigenvectors, a novel approach based on the characteristics of the PN sequence of the DSSS signal is also proposed here to exclude the wrongly estimated PN sequences. As shown in the simulations, the proposed methods can quickly estimate the PN sequence of the DSSS signal in a low-SNR environment.
6,875.4
2019-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Rationality and Rhetoric in the Corporate World: The Corporate Annual Report as an Aristotelian Genre

This paper is part of a research programme into corporate annual reports. Reports do provide the information on the past performance, present state and future prospects which investors in listed companies require for the rational choices attributed to them. They also reveal the companies' responsiveness to the publics comprising the civil societies in which they are embedded. This effect requires more than strict rationality. To use Simon's distinction, the reports then entail both substantive and procedural rationalities. We argue that classical rhetoric and its recovery in the 'new rhetoric' yield useful approaches to the latter, and that annual reports comprise a genre in the rhetorical sense. We illustrate our case through generic features in the reports of the Australian-based multinational, Amcor. We suggest for future research that accounts of corporate functioning are incomplete unless they include the pre-structured interaction between companies and their publics which we have shown here through rhetoric.

Introduction

We aim in this paper to add to the conceptual apparatus in the study of corporate annual reports. These have long been a focus in financial, organisational and managerial analysis, and it is widely agreed that they entail more than the 'information' required for investors either to make the rationally calculated and utility-maximising choices attributed to them under the model of 'economic man' or to exercise the rational control formalised in stock-market listing rules. There is no more agreement here, however, on how to allow for arational effects than there is in the study of the bounded rationality of corporate functioning more generally. Simon's call for attention to the issue remains very much to the point: 'Reasonable men' reach 'reasonable' conclusions in circumstances where they have no prospect of applying classical models of substantive rationality. We know only imperfectly how they do it. We know even less whether the procedures they use in place of the inapplicable models have any merit – although most of us would choose them in preference to drawing lots. The study of procedural rationality in circumstances where attention is scarce, where problems are immensely complex, and where crucial information is absent presents a host of challenging and fundamental research problems to anyone who is interested in the rational allocation of scarce resources. 1 We add to the study of procedural rationality by showing that annual reports fulfil the functions of a 'genre'. We take our sense of 'genre' from classical rhetoric and from its revival in the 'new rhetoric'. 2 Since the classical rhetoricians had debated how a rhetor could allow for the entanglement of formal rationality in the legal, political and ethico-moral issues in civil society, their problem was similar to that facing the writers of annual reports. Since exponents of the 'new rhetoric' have stressed the importance of the audience in any rhetorical encounter, they suggest attention to how the reports are read. From either side, the concept of 'genre' subsumes the interaction given in the writing and reading of annual reports. We have a trans-disciplinary aim in adding to analyses of the rhetoric of business activity and analysis. 3 The 'new rhetoric' was a multidisciplinary event, and the range of disciplines where annual reports are studied is just as wide.
They are the focus of a critical tradition in accounting. 4 They have been studied in administration, 5 in management, 6 in organisational theory, 7 and in environmental economics. 8 They have been a vehicle for advice in financial analysis, 9 and in public relations. 10 Our account here is intended to be applicable across all of these disciplines. It is a problem-centred and theory-building study which requires no specifically disciplinary foundation. Between this introduction and a brief conclusion, we develop the paper in three sections. In the first we give the rhetorical grounding of 'genre', and show its relevance to annual reports. To illustrate our argument, we then introduce the Australian-based multinational, Amcor, and show that elements in its reports from 1970 to the present can be matched with the forensic, deliberative and epideictic genres of classical rhetoric; the case of inherent conflict between capital and labour shows how Amcor resolves procedurally what is substantively irresolvable. We return to theory in the third section, to discuss the implications of a more general reading of the reports as rhetorical.

Classical Rhetoric and the Three Genres

'Rhetoric' is most familiar now in the pejorative sense of 'mere rhetoric', as the puff of vainglorious politicians. That usage suggests that the rational can be taken for granted, and that the factual is distinct from the way it is presented. In contrast, classical rhetoricians treated the rational and the factual as always arguable, as embedded in civic and cultural processes, and thus as never more than practically, consensually and provisionally closed. Rhetoric in this sense is immediately consonant with Simon's 'procedural rationality'. One key issue in classical rhetoric was the arguability of contrary positions. Sophists such as Protagoras held that both sides of a dispute might be valid, that any strictly logical attempt to resolve them led to paradox, and that rationality then always entailed practical rationalisation. Plato famously attacked those claims in his defence of dialectic certainty. Holding that rhetoric required appeals to prejudice, he called it a panderer's knack, unworthy of the name of 'art' used by its apologists: 'it has no rational account to give of the nature of the various things which it offers. I refuse to give the title of art to anything irrational'. 11 Aristotle synthesised those themes. He favoured the Sophists in opening his tendentiously titled Art of Rhetoric, where he held that rhetoric was not inferior to but rather was 'the counterpart of dialectic', 12 and he accepted the claim that contrary positions could be argued. But he also allowed for Plato's critique by defining rhetoric analytically: 'its function is not persuasion. It is rather the detection of the persuasive aspects of each matter'. 13 He thus treated rhetoric as a means of sustaining a focus on tensions between the rational and the arational. These were evident in the interaction between rhetor and audience, and this in turn was situationally variable. Aristotle discussed this effect under 'genre'. When he adapted a typology of forensic, deliberative and epideictic/celebratory genres, he held that these three were logically necessary: the listener must be either a spectator or a judge, and, if a judge, one either of the past or the future.
The judge, then, about the future, is the assembly member, the judge about the past is the juror, and the assessor of capacity is the spectator, so that there must needs be three types of rhetorical speech … 14 Forensic discourse was, narrowly, the oratory of the court, and more broadly any attack on or defence of particular actions. As a dissection of the past, it was always a vehicle for either the fixing of blame or the allocating of praise. Deliberative speech was concerned specifically with politics, and was applied more generally to any attempt to affect the course of affairs. Directed to the future, it typically involved a choice either to do or not to do something. Epideictic discourse referred to ceremonial or occasioned speeches, with the orator more intent on pleasing or inspiring an audience than on persuading it. Rather than being a matter of debate, the subject of an epideictic discourse allowed the crystallisation of consensus in the here and now, as in the enacting of national unity through speeches in honour of soldiers killed in action. Unlike the critically disruptive thrust in forensic and deliberative speeches, the epideictic was then conservative. It was practised by those who 'defend the traditional and accepted values … not the new and revolutionary values which stir up controversy and polemics'. 15 Given the intertwining of past, future and present in any event, this taxonomy of genres is more ideal-typical than descriptive. In practice, any discourse must be a combination of the three, with one or another genre emphasised at different points. Allowing for the expectations of various audiences under various circumstances, the taxonomy both describes and enacts the entanglement of the legal, the political/economic and the ethical/social which was characteristic of civil society in Aristotle's time. Writers in the 'new rhetoric' have made some use of it, as when Gross held that a scientific report 'is forensic because it reconstructs past science in a way most likely to support its claims; it is deliberative because it intends to direct future research; it is epideictic because it is a celebration of appropriate methods'. 16 More typically, however, they have defined 'genre' in such general terms as 'typified rhetorical actions based in recurrent situations', 17 or as 'ready solutions to similar appearing problems'. 18 Once such recurrent situations and problems have been identified, any generic text might be studied 'much as an anthropologist sees a material artefact from an ancient civilization, as a product that has particular functions, that fits into a system of functions and other artefacts'. 19 Corporate annual reports comprise a genre in both those senses: they are discourses directed to the past, future and present of corporate activity; they are addressed to recurrent problems; and they are elements in the system of corporate functions. The typology of forensic, deliberative and epideictic genres then yields a practical first step towards locating the effects of the reports in both that system and the more general functioning of civil society. Trends in the analysis of annual reports suggest the usefulness of treating them as generic. We have already noted that they are studied in a variety of disciplines, and analysts certainly have reason to focus on them.
Since they are 'the most publicized and visible document[s] produced by publicly owned companies', 20 since they 'communicate implicit beliefs about the organization and its relationships with the surrounding world', 21 and since they 'have the advantage of unobtrusive measurement in that they are written for purposes and to audiences different from [academic] analysts', 22 they give unique access to an organisation's embeddedness in civic society. Analysts seem to have resisted treating the functioning of this embeddedness as generic. In discussing how annual reports might contribute to a 'corporate brand', for example, Ind criticised 'a tendency towards sameness in the tone of the reporting. There appears to be an innate conservatism of approach and a blandness to many reports'. 23 To some extent this is statutory. Since much of what a report should include is either legislated or set in stock-market listing rules, it would be startling were there not a 'certain sameness' in a company's reviews of its achievements, of its projected growth and of its current standing. Ind tacitly granted that when he held that reports 'should be delivering three core messages: historical performance, an insight into the company's future and an indication of management capability'. 24 Despite his explicit critique of 'conservatism' and his implicit denial of the generic, then, he still evoked the conservative effects of genre, for his 'core messages' entail forensic, deliberative and epideictic effects. That specific echo resonates too with the more general sense of genre as a cluster of typified responses to similar problems in recurrent situations, for the management of any company faces the annual problem of simultaneously satisfying contrary expectations. Their potential audience is obviously composite. While they ostensibly address their reports to shareholders, managers must allow for them being read by competitors, consumers, suppliers, regulators, pressure groups, the press, the market, trade union officials, and present and future employees. The demands of each of these elements in the audience are often incompatible. That occurs, for example, in claims made for regulation and deregulation, in the tensions between different divisions within corporate networks, in disputes between workers and environmentalists, and in the conflict between capital and labour. Given those constraints, management faces a difficult task in deriving the necessary appearance of unity. Their reports require a practical resolution of difficulties which, so far at least, have been found rationally irresolvable. The annual reports then meet the conditions which Aristotle and the Sophists ascribed to rhetorical situations in general: the need to crystallise consensus when opposed positions are both tenable. So it makes good sense to treat the reports as generic. We now illustrate that argument by showing forensic, deliberative and epideictic effects in the annual reports of one company, Amcor, from 1970 to 1999.

Generic Effects in Amcor's Annual Reports

Amcor is well suited to a case study of allowance for contrary demands, for it is both typical and atypical of how Australian companies have responded to the shifts in global capital which have had a marked effect on Australian political economy. Throughout much of the twentieth century Australian industry was highly protected, the Australian workforce was highly unionised, and the Australian electorate was highly polarised.
After peaking in the election of a (union-based) Labor government in 1972, after 23 years of (liberal-conservative) coalition rule, and in its dismissal from office three years later, conflict continued during the eight years of the coalition government first elected in 1975, a period also marked by the economic stagnation which followed the world economic crisis of 1973. 25 An economic summit called in 1983 by the newly elected Labor government proved to be a watershed, for the corporatism set there allowed the first steps towards the deregulation which has remained a feature of the Australian economy. The process was intensified on the re-election of a coalition government in 1996. Under the name of A.P.M. (it was rebadged in 1986 as one mark of its increasingly global focus), Amcor was among Australia's largest listed companies in 1970, and has remained so amid the floating of the dollar, reduction of tariffs, large-scale privatisations and deregulation of the labour market which have transformed Australia. It has expanded from its base as the domestic market-leader in forestry, paper-making and packaging to become internationally significant in those industries. With levels of private share-ownership in Australia now among the world's highest, the number of its shareholders has shown a characteristic growth, more than tripling from 1970 to 1999. Given the overall concentration of capital in Australia, these owners are characteristically distributed. In the 1999 report, the 92.2% of them with 5,000 shares or less were shown as holding only 20.6% of issued shares. Conversely, the top 20 shareholders (all financial institutions) held just over 50%. The company is just as interesting where it is atypical, for it straddles the recent shifts in civil society. It operates in the 'old economy' of manufacturing, and is well regarded for it. At the same time, it is susceptible to the 'new politics' of social movements, because of the direct and often unsightly exploitation of the environment which its industries require. The 'old economy' means that old conflicts such as those between capital and labour remain in play; the 'new politics' adds new elements to the already diverse audience facing any large company. So Amcor is well suited to our purposes. It is typical of major patterns in the Australian economy (itself typical of industrial economies world-wide) and its businesses require both a general responsiveness and a specific resolution of the conflicts typical of manufacturing industries. If our argument is justified, then forensic, deliberative and epideictic effects should be evident in the company's formally rational reports. So we need to show, first, that Amcor does present a rational appearance, and then that this entails a forensic allocation of blame and praise for past results, a deliberative weighing of alternative futures, an epideictic celebration of present success, and the fusion of contrary elements in each of those in a presentation of consensual unity. As a specific focus, we concentrate on how Amcor achieves consensus despite the conflict between capital and labour.

Amcor's Rationality

If Amcor's reports were taken at face value it would seem that the company is always under strictly rational control. It is obliged to present its accounts as mathematically rational, and it similarly uses the language of rationality in the more discursive components of the reports, where the board and management interpret the numbers showing profit or loss, growth or decline, and success or failure.
They routinely couch their interpretations in terms of the 'strategy', 'efficiency', 'productivity', 'rationalisation' and 'restructuring' familiar from accounts of 'economic man'. Further, it was evident throughout the 30 reports we studied that the more uncertain the general environment became, the more the company stressed its rational responsiveness. As the old certainties of the Australian economy were removed, and as the company became more and more engaged in global competition and thus more and more vulnerable to shifts in the global economy, Amcor intensified its self-presentation as rational. The company could obviously take this emphasis for granted, for it rarely made explicit the assumption of 'economic man' underpinning it. Only exceptional circumstances provoked direct statements, as when the board reacted in the 1976 report to what it saw as a politico-economic crisis in Australia by declaring that 'We advocate the principles of free enterprise where business and individuals can exercise initiative and gain rewards commensurate with their success'. That is an exception proving the rule, for in general Amcor took the grounds of substantive rationality as self-evident. The very language of that rationality, however, suggests the shakiness of its grounding. For example, once mathematically rational 'accounting' is translated into the narrative 'account' of an annual report it inevitably entails the finger-pointing of 'accountability'. Given that effect, the strictly rational must require the rhetorical counterpart which Aristotle suggested. We could use any issue to show that process. In focusing on the relation between capital and labour we are following Amcor's own lead, for as it typically said in 1996, 'Our people are the key to Amcor's past, present and future success'. That conventional division of time points also to the division of forensic, epideictic and deliberative genres. We turn first to the forensic genre.

The Forensic Genre

Classically, the forensic genre referred first to the diagnosis of past events and then to the attribution of praise or blame for them; in annual reports, these points are covered in the required historical review of performance. Other writers have found that companies typically claim credit for their successes but attribute the blame for failure to external events, 26 and Amcor certainly fits that pattern. The obituaries for former directors and the encomia for retiring directors and executives which feature regularly in the reports, for example, leave no doubt as to who should take the credit for the company's success. Here is one example, from the 1970 report: Your Directors record with regret the death of Sir Charles Booth, C.B.E., on 27th June, 1970. Sir Charles joined A.P.M. as a Director in 1944. He was appointed Managing Director in 1947 and served in that capacity until retirement from executive office at the end of 1958 when he was elected Chairman of Directors. He retired as Chairman in 1966 but continued as a Director of the Company until his death. It was under Sir Charles' leadership that A.P.M. embarked on its major expansion in the early post-war years. His faith in the Company and his vision were outstanding at all times. His long experience and wide knowledge of the industry contributed greatly to the prosperity of the Company and his advice and counsel will be sadly missed.
The tone was identical when the 1996 report included a note on the retirement of Stan Wallis, after 19 years as Managing Director: Mr Wallis joined the company in 1960 and was appointed Deputy Managing Director in 1975. Since then he has made an outstanding contribution to Amcor's continued growth and development in Australia and its expansion offshore. On behalf of all shareholders we thank him for his strong and imaginative leadership and for the important role he has played in establishing Amcor as a world-ranked packaging and paper manufacturer … More generally, management almost always praises the company's workforce. Thus, 'The year's [1968-69] good results were achieved through the work and co-operation of staff and employees'; 'The progress of the Company during the year [1978-79] results, in large measure, from the contribution and enthusiastic commitment of those who work for A.P.M'; and 'The excellent results achieved in 1988/89 reflect the skill and dedication of employees throughout the company'. Amcor then presents success as a collective achievement, under the guidance of its management. It is quite otherwise with failure. Although Amcor experienced occasional setbacks, there is scant concession throughout the 30 reports that the board may have erred in setting policy, or management in executing it. Even in the 1990s, when the company underwent threatening reverses and when several major investments proved to be ill-chosen, there is scarcely any acknowledgement of board or managerial mistakes, and where there is, the point is tacit. The following passage from the 1999 report illustrates this effect: The improved results reflect the positive impact of major changes made throughout the company in the past few years. These have included changes in our organisational structure and senior management, rationalisation and restructuring of many of our businesses, the sale of a number of non-core or under-performing activities, a substantial expansion of our Australian fine papers business and a major and continuing cost reduction program. This might imply that the board had previously been dilatory in setting the organisational structure and that senior management had been unsatisfactory, but that is as far as the company goes. Rather, Amcor routinely evokes four causes for disappointing results: economic conditions; governmental regulation; the intensity of competition in its markets; and, of most interest here, the same labour whom it ritually praises for success. During the 1970s and into the 1980s, A.P.M. routinely blamed strikes and industrial conflict when performance was unsatisfactory. This is from the 1971 report: Industrial disputes in the first half of the financial year detrimentally affected deliveries of papers and paperboards. Increases in productivity have been more than offset by substantial increases in costs during the year, notably in wages and in many other costs outside the Company's control, principally materials and transport. These factors were the main causes for the fall in profit earned for the year from the manufacture of paper and paperboards. Labour remained a cause for blame even under the corporatism of the 1980s. Thus the company noted in 1987 that: Negotiations between major packaging companies and the relevant unions regarding 'second tier' claims for additional remuneration benefits have progressed satisfactorily except in Victoria where industrial disruption has caused major and unnecessary losses of production.
Although the company welcomed the attention at the time to deregulation of the labour market, it was impatient with the pace of the process. Thus it held in 1990, as one cause of a disappointing profit, that 'Industry also is hampered by Australia's inability to carry out effective microeconomic reform on the waterfront and in our industrial relations, transport and telecommunications systems'. Even without the explicit reference to labour, microeconomic reform in Australia has often entailed union-breaking. To judge only by Amcor's reports, that process was successful, for after the earlier routine attention to unions and industrial conflict, the issues disappeared in the mid-to-late 1990s. The company occasionally made explicit the tension in its blaming of sections among those whom it also praised, as when it wrote in 1976: In the earlier part of the financial year our mills worked with little industrial disruption. However, in recent months there was an upsurge in industrial disputes. Over the full year the Company lost 40,000 tonnes of production through strikes and bans damaging both to A.P.M. and to our employees. Nevertheless, the majority of our staff and employees work hard and give loyalty and co-operation. Your Board places on record its appreciation of their contribution during another difficult period. Amcor then resolved the contradiction between praise and blame by reserving the latter for a disruptive minority. If that conventional scapegoating shows the entanglement of the forensic and the epideictic, then both are further enmeshed with the deliberative, for Amcor presented the loyalty and co-operation it praised as outcomes of its policy.

The Deliberative Genre

In classical rhetoric, deliberative speech referred specifically to politics and more generally to any policy-making in civic affairs. It was entwined with the other two genres in that it entailed a response to diagnosis of the past and an attempt at consensus in the present. Amcor, of course, adopted policies on labour. As it said in 1977, 'We recognise for A.P.M. to remain efficient the Company must have a loyal and well trained workforce'. It routinely included labour relations among its goals. In 1981, for example, it noted its intention to 'remain a good employer concerned with the safety, work satisfaction and overall welfare of employees', and it listed among its key objectives in 1995 its aim to 'maintain our significant commitment to mutually-beneficial employee relations through safe working conditions, training programs and recognition of the productivity and potential of our employees'. The taken-for-granted version of the past given by 'retain' and 'maintain' is worth noting here, and so too is the moral patina given to the mixture of 'efficiency' and 'loyalty' by Amcor's use of 'must'. Again, Amcor appears to take the assumptions behind that gloss as self-evident. There is a tone of restrained impatience when it invokes them explicitly, as when it stressed in 1972 that its: ability to provide good wages and working conditions is linked with cooperation by employees and Unions in achieving improved efficiency and the elimination of unjustified strikes, and the acceptance of modernisation and appropriate crewing numbers. In keeping with the asymmetry between praise and blame, the reports never mention strikes which may have been justified; modernisation and its associated rationalisation appear to be their own justification. The company also applied that sense of obviousness on a larger scale.
It claimed in 1983, for example, amid the emerging corporatism of the newly elected Labor government, that: The continuing expectation of increasing incomes and more extensive welfare and community services can be justified only if we are able to achieve real increases in national productivity. We are hopeful that the Federal Government, armed with the spirit of consensus developed at the Economic Summit, will continue to lead the community to a better understanding of these principles. Amcor reported a range of labour-related policies towards improvement in its own productivity. One set of policies concerned the 'communication' by which it tried to instil in its workforce a morally loaded version of its substantive rationality. Thus the company claimed in 1978 that it: recognises the importance of effective communications and our policy is to develop a better understanding between management, employees and unions to encourage:
- an increasing concern for, and pride in, the progress of the Company;
- responsible attitudes towards production efficiency and industrial relations;
- recognition that A.P.M.'s success depends upon co-operation at all levels and between the various functions in the Company.
It said similarly in 1987: Considerable effort has been devoted to communicating to all A.P.M. employees our strategic plans and underlying objective of achieving international cost competitiveness. Our aim is to foster the acceptance and implementation of participative productivity improvement techniques. Local and company wide consultative processes have been established, with the support of relevant unions, to reassess training needs and redefine job requirements, quality and reliability standards for import replacement and export development and attitudes to manufacturing activities. This action aims to recognise the often hidden potential that already exists in the organisation and to use these resources to achieve improved results. Amcor did more than talk about this 'communication'. It also gave material expression to its calls for better 'understanding', for 'mutually beneficial' labour relations and for 'participative productivity improvement techniques'. In the mid-1980s the company developed a policy for employees to buy shares at a discount. Giving the workforce 'a direct stake in the company's future growth and prosperity' (1988), and offering it an 'incentive to strive for improved results and to share in the benefits of Amcor's success' (1989), the offer was widely taken up. At a peak in 1996, more than 14,000 of the company's 25,000 employees were listed as shareholders. Since introduction of the plan coincided with the declining salience of unions, the alignment of interests it represented appeared to mark the company's success in resolving tensions between capital and labour. That success, however, was not complete, and Amcor continued to urge policies to shift the remaining tensions in its favour. Thus it treated the high unemployment during the recession of the early 1990s as more an opportunity than a problem. It noted in 1991, for example, that: There will not be a better opportunity than during the current economic recession for management of Australian industry to reach a common understanding with the workforce on ways to reduce costs and improve efficiency and productivity.
Amcor has made impressive progress in the past few years in this regard, but if our businesses are to remain internationally competitive there must be reasonable incentive for us to continue to invest and expand in Australia. Since it is clear enough on whose terms the 'reasonable' would be decided, this is a tacit concession of continuing conflict. Although the reference to 'common understanding' then might have the edge of an offer which cannot be refused, the appeal to consensus also suggests the interweaving of the epideictic throughout this deliberative policy-making.

The Epideictic Genre

Referring to ritual discourse, the epideictic genre is immediately relevant to annual reports. Orators in that mode were less intent on persuading an audience to a particular position than on pleasing or inspiring it, as they crystallised consensus through the celebration of success or the commemoration of tragedy. Annual reports, of course, are typically devoted to the success required for corporate survival. Amcor's claim in 1996 that it 'has a strong record of growth and has consolidated its position as one of the world's leading packaging and paper companies' is a characteristic gesture in that direction. As already suggested in the forensic praise of employees and the deliberative policy towards a 'common understanding', inclusion of the workforce is crucial to this epideictic celebration. We noted too that the appeal to unity in the epideictic genre is conservative in its results, in contrast to the potentially radical thrust of diagnosis or policy-making. That effect is suggested in the 'our people' which Amcor commonly uses in the reports to describe its employees. The phrase evokes a feudal sense of belonging and of mutual obligation rather than the self-interested and calculating rationality pervading the forensic and deliberative moments of the reports. Although Amcor rarely refers explicitly to 'corporate culture', with all the arationality which that entails, it does implicitly invoke a sense of collectivity beyond that of 'economic man'. We have already suggested one means by which it does so, its use of scapegoating to resolve the tension between forensic praise and forensic blame of its workforce. That is a routine device in the reports. Here is another example, from the 1977 report: Our relationships with most unions are good; however, small groups both within A.P.M. and outside the Company are bent on disruption to the detriment of the workforce as a whole and some maintenance unions are using strikes and other work limitations to press their claims for pay increases and other benefits outside the Government's indexation guidelines … Since the rhetorical antithesis which the company uses here always suggests the polarity of good and evil, its identification of a 'them' also identifies a unified and right-thinking 'us'. Isolation of a small minority allows the celebration of such putatively shared values as the 'commitment', 'dedication', 'enthusiasm', 'loyalty' and 'team-work' which Amcor regularly invokes. It reinforces that effect by a means which will be evident in the passages we have quoted, its routine use of the first person plural. When the chairman, the managing director or divisional managers use 'we', 'us' and 'our', they sometimes have a specific referent in the board or in groups within the company. But first person plural pronouns have the advantage of imprecision, and their use allows more to be implied than stated.
Phrases like 'our team of highly skilled people' (1981), 'our most important resource' (1989) or 'our management and employees' (1992), for example, gloss the differences between owners, management and labour. They also invoke the readers' lived senses of collectivity beyond the company. Here is a not atypical example, an extract from the 1983 report which we have used above: The continuing expectation of increasing incomes and more extensive welfare and community services can be justified only if we are able to achieve real increases in national productivity. We are hopeful that the Federal Government, armed with the spirit of consensus developed at the Economic Summit, will continue to lead the community to a better understanding of these principles. The first 'we' denotes 'we Australians'; the second refers narrowly to 'we, the board'. By eliciting the broad sense of collectivity and then by identifying itself with it, the board naturalises what it can expect from its workforce. To derive unity from disunity, that is, it transmutes the rational to what is obviously reasonable on the basis of national traditions. Annual Reports as Generic We have shown that Amcor uses its annual reports to communicate a forensic allocation of blame and praise for past results, a deliberative planning for the future, an epideictic celebration of present success, and the glossing of contrary elements in each of these in an appearance of consensual unity. Since those effects are evident in the one issue of the relation of capital and labour, we are confident that they are more generally applicable. But we should also note three limits in our treatment of annual reports as generic. First of all, the forensic, deliberative and epideictic genres of classical rhetoric give no more than a convenient framework. They are separable analytically, but only so long as the point of analysis is the interaction between their effects. That is why we have stressed throughout our discussion that the genres are mutually entwined. Secondly, the apparent consensus which follows from the practical resolution of conflict in that entanglement is never more than provisional. Thus the appearance of unity which Amcor derived from its 'participative productivity improvement techniques' and from its aim of 'mutually beneficial employee relations' was vulnerable, as the 1998 report indicates: substantial abnormal losses were incurred during the year, mainly reflecting costs of plant closures and rationalisations of poorly performing businesses. Regrettably, this has involved job losses, but as a result Amcor is now more efficient and competitive than ever before … The unified and arational 'dedication', 'loyalty' and 'commitment' which the company claimed to value were clearly more negotiable than it allowed. It implicitly expected retrenched employee-shareholders to welcome as owners what they might have found devastating as workers. Many of Amcor's employees faced those conflicting rationalities, as shown in Table 1. We cannot tell how many of the 6,000 employees who left the share register were among the 5,500 retrenched in the cause of efficiency, but it is fair to assume that the two groups had many common members. It is equally striking that 8,000 remained as shareholders despite this demonstration of how precarious their positions were. 
Our third limitation, then, concerns the need to study annual reports in conjunction with studies of the meanings of share-ownership, for Amcor's remaining employee-shareholders have clearly made sense of their situation. Although we have shown the generic constraints which the company faces, and although we have shown some of the rhetorical moves it makes in allowing for them, we have relied only on our own reading for our account. These limitations are important, but they do not lessen the significance of our study. Our demonstration of forensic, deliberative and epideictic moments in Amcor's reports, and of its generic allowance for contrary demands, warrants a more general reading of annual reports as rhetorical. We turn now to some theoretical implications of that approach. Rhetoric and Annual Reports The rhetorical working of corporate communication has implications well beyond the scope of this paper. We touch here on only two issues: the long-standing tensions between formal and informal rationalities; and the 'control' associated with rationality in much organisational analysis. To develop those points requires that we return briefly to classical rhetoric, and to the 'new rhetoric' of its adaptation. As we stressed when we introduced classical rhetoric, disputes over rationality have a long history, and our generic study of annual reports is in that sense nothing new. Rhetoric might even have originated in the study of business operations, for tradition has it that although it had long been taught orally it was first formalised by Corax and Tisias amid the political upheaval and property disputes following the death of Hiero of Syracuse in 466 BC. 27 A legend about them highlights the tension between formal and rhetorical rationalities which Plato attacked in his disputes with the Sophists and which is still a feature of rationality in practice. Impressed by Corax's success in the assembly and courts of the new republic, Tisias approached him for tuition. The two agreed that Tisias would pay for his instruction once he had proved its worth by winning his first case. As soon as the lessons were completed, Corax sued for payment, arguing that: 'If I win, I win; if I lose, I also win, by the terms of the contract'. But Tisias responded: 'If I win, I win; and if I lose, then I too also win, by the terms of the contract'. The judges took the only sensible decision, and drove them both from the court. The story may be apocryphal, but its point remains. There is a permanent tension between apodeictic appeals to reason and epideictic appeals to consensus in formal and informal rationalities. That tension might almost define the social sciences. Some disciplines and subdisciplines are based on the assumption of a rationally calculating and utility-maximising 'economic man', and others on the observation that the analytical benefits deriving from that assumption entail an oversimplification of rationality in practice. Simon's call for attention to 'procedural rationality' in situations where attention is scarce, where problems are complicated, and where information is lacking is but one among many attempts to restore that pre-empted complexity. Even if analysts who have allowed for arationality have often met the fate of Corax and Tisias, the issues raised in classical rhetoric remain permanently relevant in the study of economic activity. 
The 'new rhetoric' on which we have also drawn, however, is not just a return to classical themes, for the interaction between speakers/writers and their audiences stressed in it entails a crucial shift of emphasis. Where the rhetor was the focus of classical rhetoric, the new rhetoric is defined by attention to listeners or readers. When incompatible statements are equally reasonable, 'the appeal to reason must be identified not as an appeal to a single truth but instead as an appeal for the adherence of an audience'. 28 Unless an attempt to persuade is adapted to its listeners' or readers' expectations, it is like the question-begging of formal logic, in that the rhetor would presume the agreement at issue. It follows that 'the image of the powerful orator playing masterfully with the emotions of the helpless crowd is a myth. … [I]f orators can control crowds, it is only because crowds control orators'. 29 That stress on the audience fits the new rhetoric to the age of mass democracy, mass communication, and mass involvement in corporate activity. If the rising levels of private share-ownership throughout the post-industrialising world do mark a shift from 'managerial capitalism' to 'investor capitalism', 30 then it becomes more pressing than ever to study the effect on corporate operations of investors' expectations. The new rhetoric yields a new approach to the old tension between ownership and control, for when a company fulfils forensic, deliberative and epideictic expectations it is yielding to a form of control from below. There is then a certain 'metaphysical pathos' in rationalised accounts of rationalised managerial control. 31 Shareholders are the ostensible audience for annual reports and since we have shown generic effect in the reports we can infer the constraints imposed on firms by what they expect those shareholders expect. Even if the fit between what a company does and what it says it does is only rough - and most analysts of annual reports agree that it is generally better than that - it is still moot as to who is controlling whom. Just as orators can control crowds only because crowds control orators, we suggest that a company controls its shareholders only if its shareholders control it. Thus the blends of substantive efficiency and procedural loyalty which we found in Amcor's reports are at the same time constrained by readers' expectations of logical and ethical consistency and enabled by those readers' more general experience of and allowance for inconsistency. Amcor's surviving employee-shareholders show that effect clearly enough. To over-stress the rational is to miss that generic interaction. That effect should not be over-stressed. Managers do occupy strategic positions, and not all shareholders/shareholdings are equal. Amcor's 20 largest shareholders own more than 50% of its shares, and it cannot be doubted that they disproportionately exercise substantive control. But even when that imbalance is granted, procedural/rhetorical control still remains in play. Since the block shareholders are financial institutions which in turn must satisfy their own shareholders and investors, the effects of interaction are simply shifted one step. Since shareholders are not the only audience for annual reports, those effects are more general still. Any company must also take into account the often incompatible demands made by its potential readers. 
As Amcor's deft allowance for the conflict between capital and labour shows, what is substantively impossible can be resolved procedurally/rhetorically, within the constraints of a fuzzy control from below. Classical rhetoric is then a useful supplement to analyses in which rationality is taken as unproblematic. Such a supplement is permanently necessary, for although 'economic man' has been queried ever since he made his rationally calculating and utility-maximising debut, he still strides through much of the literature. The emphasis on the audience in the new rhetoric gives an extra edge to that approach, in re-opening access to the issue of 'control'. Those are among the benefits which follow from treating corporate annual reports as a genre. Conclusion We began this study of annual reports with Simon's remark on the under-studied problem of how 'reasonable men' reach 'reasonable' conclusions when they have no prospect of applying classical models of substantive rationality. We have argued that both classical rhetoric and the 'new rhetoric' are useful approaches to that difficulty, and have illustrated their potential by focusing on the narrow question of how one company, Amcor, reconciles rational efficiency with its stated aim of fostering a loyal and committed workforce. The generic effects which we found in Amcor's reports showed that Aristotle's stress on the epideictic as the counterpart of the apodeictic yields an interactive approach to substantive and procedural rationalities: substantive rationality is a necessary but not sufficient condition of reasonableness; the rationality of economic man is among the arational values enacted procedurally. For a company to present a reasonable responsiveness to its shareholders then requires that it draw on both forms of rationality. Since we have argued for Amcor's success in that regard on the basis of its financial success we need to add another note of caution here. Amcor used a rising share price in 1999 as evidence of its success in restructuring, but its share price is languishing as we write this, for the company has been neglected amid the booming demand for shares in the new technology companies. It is too early to assess claims that the old economy has passed; Amcor's successful divestment of its paper-making activities, which also occurred as we were writing this article, suggests it is alive and well. But we are confident that however Amcor balances forensic diagnosis, deliberative policy-making and epideictic celebration in its response to that falling share price and that divestment in its annual report for 2000, the account will be as plausible as the pride it took at the market's response to it in 1999. That is generically required, and Amcor has proven adept at meeting demands which are simultaneously defensive, informational and ceremonial. When attention is scarce, when problems are complex, and when information is absent, both firms and their shareholders make what sense they can of their uncertain embedding in processes beyond their control and of their uncertain relation with each other. They do so by drawing on what can be taken for granted and by attending only selectively to contradictions in that taken-for-grantedness. When embeddedness entails legal, politico-economic and socio-ethical elements, the rhetoric developed to account for their intertwining in civil society is a promising, and perhaps necessary, focus for research. 
In later papers we aim to bring that focus to such issues raised in annual reports as corporate governance, corporate reputation, corporate social and environmental responsibility, and corporate strategy. The over-rationalised variants on stakeholder theory and agency theory commonly used in these fields require supplementing with the attention to the arational which we have treated here through 'genre'.
10,672.2
2000-09-01T00:00:00.000
[ "Philosophy", "Business" ]
Designing interpretation tracks for nature tourism in Tahura Gunung Menumbing, West Bangka Taman Hutan Raya (Tahura) Gunung Menumbing is a well-known protected area in West Bangka, Indonesia, and a popular heritage site for its old historical house located at the top of the mountain that was used by the Dutch to isolate the founding fathers of the Republic of Indonesia during the war era. Although it has been a popular tourism site, many potential attractions in Tahura Gunung Menumbing are still unexplored. This research aimed to design interpretation tracks for nature tourism in Tahura Gunung Menumbing to increase the attractiveness of the tourism destination. To achieve this purpose, the research combined field surveys, literature reviews, and interviews. We followed the procedure of the Bureau of Land Management to score landscape attractiveness. A total of 142 plant species, 61 animal species, and 12 landscape points of interest were found to be potential interpretation objects. We identified 10 interpretation tracks varying from 160 to 4,200 meters in length and containing 2–8 interpretation objects each. Six interpretation programs are then proposed: Menumbing Jungle Tracks, Tin Mining Explorations, Primates of Menumbing, Snakes to Explore, Menumbing-Belt Adventure, and Menumbing’s landscape and socio-culture. Introduction Taman Hutan Raya (Tahura) Gunung Menumbing is a well-known protected area in West Bangka Regency, Indonesia, because the unique plants and animals in it represent the lowland forest ecosystem of Bangka-Belitung Island. Besides its richness in biodiversity, Tahura Gunung Menumbing (TGM) is also a well-known heritage site for its old historical house, locally called pesanggrahan, located at the top of the mountain (± 450 meters above sea level), which was used by the Dutch to exile the founding fathers of the Republic of Indonesia, such as Ir. Soekarno and Mohammad Hatta, among others, during the war era. Moreover, the landscape surrounding the pesanggrahan is beautiful: the Bangka Strait can be clearly seen along with the scenery of Muntok city. For this reason, TGM has attracted domestic as well as international tourists, has become one of the popular tourist destinations in Muntok, and has thus contributed significantly to the Local Own-Source Revenue (PAD) of West Bangka District. Even though it has become a popular tourist destination, there are still many potential tourist objects in TGM that have not been explored and optimally managed. According to the West Bangka Regency plan, TGM will be developed as an ecotourism area. In order to develop ecotourism in TGM, nature interpretation activities are needed to provide benefits for both conservation and the community in the surrounding area. Through nature interpretation, environmental education can be delivered to teach tourists to respect and appreciate nature and the environment. Nature interpretation uses interpretation tracks that bring tourists a new experience in understanding nature. An interpretation track connects several points of interest that contain unique objects and attractions with conservation and environmental messages. The interpretation tracks contain interpretation programs that deliver messages about natural phenomena, historical values, geological values, etc., to visitors [1]. This research aims to design nature interpretation tracks in TGM to increase the attractiveness of the tourism destination. 
We followed the guideline of the Bureau of Land Management to score landscape attractiveness and then developed interpretation programs. The results of this research will be useful for authorities such as the West Bangka Tourism Office and the West Bangka Environmental Office to improve both the economy and the ecology of TGM. Moreover, through this nature interpretation program, illegal forest encroachment activities in TGM can potentially be minimized owing to the provision of alternative income to the community. Study area Tahura Gunung Menumbing (±3,333.20 ha) is administratively located in Muntok sub-district, West Bangka Regency, Bangka Belitung Islands. Geographically, Tahura Gunung Menumbing is located between 105°09'29''-105°14'34'' East Longitude and between 1°59'26''-2°02'29'' South Latitude. The topographical conditions range from flat to very steep slopes, with the highest peak reaching 450 meters above sea level (m.a.s.l.). TGM has an A-type climate according to the Schmidt-Ferguson climate classification, indicating very wet conditions throughout the year, with monthly rainfall varying between 0.8 mm (dry months) and 311.0 mm (wet months). The lowest rainfall occurs in September, while the highest rainfall occurs in January. The average air temperature ranges between 23.5° and 26.5°C, and the air humidity ranges from 57 to 97%. Methods In order to design the interpretation tracks, data were collected by field observations, literature studies, and interviews. These data included natural resources such as flora, fauna, and landscape, and the local culture inside TGM. The potential objects of interpretation were determined based on whether the flora, fauna, and landscape found along the observation track were interesting, rare, or unique. Next, for each object, we identified the morphological characteristics and potential attractions. Specifically for landscape attractiveness, we used the guideline of the Bureau of Land Management, in which the assessment of landscape potential is based on landscape elements such as landform [2]; a minimal scoring sketch in the spirit of this guideline is given after this block of text. For local culture attractiveness, the data were collected by interviewing the TGM manager and the surrounding community. All the potential objects of interpretation were marked using GPS (Global Positioning System). Along with the interpretation objects, the interpretation tracks were designed based on the following criteria: the shortest way to the spectacular objects, existing walking pathways, avoidance of sensitive plant communities and wildlife habitats, avoidance of straight pathways, and consideration of the total time duration [3]. Data analysis We used a descriptive analysis based on the literature reviews and interviews with the local guides to define all the potential interpretation objects, both natural and cultural. The spatial analysis was carried out using ArcGIS v.10.4 software to create the interpretation tracks and locate the points of interest. Then, we visualized the interpretation tracks together with their geographic positions, topographic conditions, and various other information needed to support the interpretation programs. Potential nature and culture interpretation objects in the Tahura Gunung Menumbing We identified 142 plant species, comprising trees, shrubs, palms, orchids, and herbs, inside the TGM. 
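As referenced in the Methods above, the scoring step can be illustrated with a short, hedged sketch. The Bureau of Land Management's scenic-quality inventory conventionally rates seven key factors and sums them, with class breaks near 19 and 12; the factor names and thresholds below follow that common convention and are assumptions for illustration only, not values taken from this study.

```python
# Minimal sketch of a BLM-style scenic quality rating.
# Assumed convention (not from this paper): seven key factors are scored and
# summed; totals of 19+ -> Class A (high), 12-18 -> Class B (medium),
# otherwise Class C (low).

BLM_FACTORS = ["landform", "vegetation", "water", "color",
               "adjacent_scenery", "scarcity", "cultural_modifications"]

def scenic_quality(scores: dict) -> tuple:
    """Sum the per-factor scores and map the total to a quality class."""
    total = sum(scores.get(f, 0) for f in BLM_FACTORS)
    if total >= 19:
        rating = "A (high)"
    elif total >= 12:
        rating = "B (medium)"
    else:
        rating = "C (low)"
    return total, rating

# Hypothetical scores for one point of interest (illustrative values only):
example = {"landform": 4, "vegetation": 3, "water": 5, "color": 3,
           "adjacent_scenery": 3, "scarcity": 2, "cultural_modifications": 0}
print(scenic_quality(example))   # -> (20, 'A (high)')
```

In this study the same idea is applied to each point of interest, which ends up rated at a medium or high landscape quality level.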
From this number of plant species, a total of 21 were used as objects of interpretation, including Chalophyllum pulcherimum, Palaquium rostatum, Eurycoma longifolia, Syzygium zeylanicum, Arenga pinnata, Calamus rotang, Dillenia suffruticosa, Mangifera caesia, Calamus manan, Melaleuca leucadendron, Handroanthus chrysotrichus, Ficus exasperata, Ficus annulata, Ficus rumphii, Ficus variegata, Dendrobium leonis, Hevea brasiliensis, Acacia mangium, Pithecellobium jiringa, Parkia speciosa, and Aeschynanthus pulcher. Ficus species were found scattered along the observation track and were interesting because they have small fruits growing scattered on the stem (lateralis). In addition, we found an orchid, namely Dendrobium leonis, that grows well on rocks. Moreover, we found Palaquium rostatum, which is the identity flora of Bangka Belitung Island. Besides flora, we found fauna comprising 16 species of mammals, 30 species of birds, and 15 species of herpetofauna. A total of 19 types were used as objects of interpretation, namely Tarsius [5]. These two species are well known as the fauna identity of Bangka Belitung Island. Based on the field observations, we identified at least 12 points of interest with beautiful landscape scenery in TGM. Using the guideline of the Bureau of Land Management, four points of interest were identified as having a medium landscape quality level and eight points as having a high landscape quality level. The four points with medium quality were Post 1 Menumbing, Goa Jepang, Kelekak, and the Tahura border, whereas those with high quality were gazebo 3, Pesanggrahan Menumbing, the Menumbing slope, the TVRI Tower, the Menumbing water source, the Watervank water source, the Argotirta water source, and the illegal ex-mining site. Of these 12 points of interest, the Argotirta water source was found to have the highest value, since this point offers views of green hills and abundant water storage. As for the potential cultural objects in TGM, Pesanggrahan Menumbing is the main attraction, where visitors can see an old historical building that served as a place of exile for 8 national figures, including the founding fathers of the Republic of Indonesia, the first president and vice president, Ir. Soekarno and Moh. Hatta, during the colonial period. Pesanggrahan Menumbing has also been designated as a cultural heritage site in Muntok, West Bangka. The area boundary interpretation track is 870 meters long and takes about 30 minutes on foot. The track is a footpath that passes through a rubber plantation and can be traversed only on foot; however, on the side approaching the border of the area, marked with cassava vegetation, motorized vehicles can pass. This track has the interpretation objects Hevea brasiliensis, Parkia speciosa, Phaenicophaeus curvirostris, Caprimulgus affinis, the kelekak landscapes, and the Tahura borders. Discussion The results show that ten interpretation tracks were designed and six interpretation program themes were then proposed, namely Menumbing Jungle Tracks, Tin Mining Explorations, Primates of Menumbing, Snakes to Explore, Menumbing Belt Adventure, and Menumbing's landscape and socio-culture. The theme of Menumbing Jungle Tracks is a program that aims to introduce Menumbing's natural resources, including flora, fauna, and landscapes. This theme is used on the post 2 - Menumbing water source track, the post 2 - TVRI tower track, the post 1 - Watervank water source track, the Menumbing out track, and the Menumbing circle track. 
The theme of Tin Mining Explorations, used on the post 1 - Argotirto water source track, aims to introduce forest ecosystems affected by illegal mining and the natural resources that are still there. The theme of Primates of Menumbing, used on the Menumbing post 1 - top track, aims to introduce the primates that can be found in Tahura Gunung Menumbing (TGM). Topics explored in this theme are the characteristics, behavior, and distribution of these primates. The theme of Snakes to Explore, located on the top - Menumbing water source track, aims to introduce the snakes around the Menumbing guesthouse so that visitors can identify the types of snakes, their habitats, and how to handle them. The theme of Menumbing Belt Adventure is used along the border line of the area to introduce the Tahura area and the surrounding vegetation. Finally, the theme of Menumbing's landscape and socio-culture is used on the post 1 - Pavilion 1 track, which aims to introduce the history of the area as well as stories that live in the community and points with interesting natural scenery. The six themes form a core that
2,378.6
2021-01-09T00:00:00.000
[ "Geology" ]
Graphene-Based Multiband Chiral Metamaterial Absorbers Comprised of Square Split-Ring Resonator Arrays With Different Numbers of Gaps, and Their Equivalent Circuit Model The equivalent circuit model (ECM) is developed by using a MATLAB code to analyze graphene-based multi-band chiral metamaterial absorbers comprising graphene-based square split-ring resonator arrays in the terahertz (THz) range. The absorbers are simulated numerically by the finite element method (FEM) in CST Software to verify the ECM results. Our introduced multi-band absorbers can be used as suitable platforms in polarization-sensitive devices and systems in the THz range. We have designed four tunable graphene-based chiral metamaterial absorbers containing one, two, three, and four gaps in their arms, respectively. The absorber with one gap has four absorption bands (two for TE and three for TM, one band of both modes approximately overlaps) with absorption >50%. The absorber with two gaps has three absorption bands (two for TE and two for TM, one band of both modes approximately overlaps). The absorber with three gaps has four absorption bands (three for TE and two for TM, one band of both modes approximately overlaps). The absorber with four gaps has three absorption bands (three for TE and two for TM, two bands of both modes approximately overlap). They work in the 1-5.5 THz range with maximum linear dichroism (LD) responses of 98, 99, 89, and 77%, respectively. The designed absorbers are dynamically tunable. Additionally, by a 90° rotation of the incident electromagnetic fields, it is possible to switch between the number and/or location of absorption bands, making these absorbers promising candidates for future THz systems. The ECM results agree with the FEM ones. The proposed ECM procedure is a simple and fast way to recognize the characteristics of the designed absorbers. Our proposed absorbers could be promising enablers in future THz systems. I. INTRODUCTION Chiral structures or chiral metamaterials do not superimpose onto their mirror images, and this feature causes them to produce non-equal or polarization-sensitive responses. This non-equality is measured as chirality responses such as circular dichroism (CD: the difference in absorbance for right-handed circularly polarized and left-handed circularly polarized waves) and/or linear dichroism (LD: the difference in absorbance for TE- and TM-polarized waves) [1]. Graphene, a 2D layer of carbon atoms, has excellent properties that make it a promising candidate for optoelectronic devices. Graphene-based unit-cell resonators of chiral metamaterials have been designed and developed recently to produce tunable chirality responses [2]-[8]. These metamaterials have chirality responses of up to 96%, but the development of tunable chiral metamaterial absorbers, which are capable of switching between the number and/or location of absorption bands by only a 90° rotation of the incident electromagnetic fields, is a mostly unexplored field of research. Some chiral metamaterial absorbers have been proposed recently [9]-[14] with one or two maximum absorption bands. There is a need for multi-band chiral absorbers in THz communication systems [15], when it is necessary to absorb more information through multiple bands. Similar needs for multi-band absorbers exist in sensing [16], [17], spectroscopy, and imaging [18] applications. 
However, most of the published chiral metamaterial absorbers are not graphene-based resonators, so their absorption spectra and chirality responses are not dynamically tunable. In these papers, the maximum chirality response reached 88%, and they do not provide any theoretical model for a better understanding of the chiral metamaterial absorbers. Recently, some chiral metamaterials containing split-ring resonator arrays were designed for telecommunication and spectroscopy applications [19]-[25]. The resonator designs are based on metals, and those metamaterials work mostly in the gigahertz (GHz) region. Only the metamaterial in [23] works in the THz region. Those metamaterials have good absorption properties, but they lack dynamic tunability (they do not contain a graphene or transition metal dichalcogenide resonator layer). These works, except [24], also introduced an ECM approach for the metamaterial, but their ECM approaches differ from ours, as metal resonator arrays and graphene resonator arrays have different formulas for the circuit impedances. Our earlier paper [6] reported a multi-band graphene-based chiral metamaterial absorber containing a single-layer U-shaped resonator array with an LD chirality response reaching 94%, while the other work [7] introduced a dual-functional graphene-based chiral metamirror containing two-layered complementary 90°-rotated U-shaped resonator arrays with an LD response reaching 96%. In this work, we propose multi-band graphene-based chiral metamaterial absorbers containing single-layer square split-ring resonator arrays with different numbers of gaps. Compared to the earlier published paper [6], we carried out simulations for a similar one-gap structure but in a different frequency range. Compared to the paper [7], the simulation and circuit modeling procedures of this work are expected to be less resource-demanding, as the proposed structure contains only a simple single resonator layer and a dielectric. In the work [6], the ECM approach for the metastructure is based on the relation between the S parameters and the elements of the ABCD matrix. The impedances of the graphene pattern are obtained as a function of the pattern reflection, substrate impedance, and electrical length. The gap of the structure is modeled by a parallel capacitor, and the ECM of the structure containing ion gel/graphene/dielectric/gold layers is developed by use of the ABCD matrix. In the case of the paper [7], we used equivalent conductivity relations to obtain the conductive characteristics of the graphene patterns. Then the transfer matrix elements were determined for the structure to obtain the reflection characteristics of the metastructure, containing graphene/dielectric/graphene/dielectric/gold layers. In this work, similar equivalent conductivity relations are utilized for the metamaterials containing ion gel/graphene/dielectric/gold layers. The gap(s) of the metamaterials are modeled by series capacitor(s), and the impedances of the patterns are obtained as a function of the reflection, gap length(s), and the cosine/secant of the input/output incident wave angles. Then, the ECM is developed using the transmission line formula. 
The proposed tunable chiral absorbers could be utilized in applications such as the enhancement of chirality responses for chiral biomolecules like DNA and amino acids, since natural biomolecules have weak chirality responses [7], polarization transformers, stealth technology, thermal bolometers [9], wavelength-selective absorption filters, hot-electron collection devices [10], and chiral imaging to achieve distinct displayed images by switching the polarization of the incident wave [14] in the THz range. II. METAMATERIAL AND EQUIVALENT CIRCUIT MODEL The periodic and unit cell views of the designed tunable graphene-based chiral metamaterial absorbers composed of square split-ring resonator arrays with different numbers of gaps are given in Figs. 1(a-e). The metamaterials are simulated at room temperature. For the biasing procedure of the graphene resonator patterns, we have used an ion gel layer with a thickness of d_ig and a refractive index of 1.42 [26]. The dielectric spacer is made of Teflon with a refractive index of 1.45. The dielectric spacer is backed with a gold layer with a conductivity of 4.56 × 10^7 S/m [27] to prevent the transmission of the TE and TM modes. Simulations are done in CST Microwave Studio 2018 [6], [7]. The dynamically tunable devices work as multiband absorbers with the possibility to switch between the number and/or location of absorption bands by only a 90° rotation of the incident electromagnetic fields. The considered parameters and their optimized values for the designed metamaterial absorbers are reported in Table 1. We have used the parametric sweep to optimize the results to reach the maximum linear dichroism (LD) response for the proposed chiral metamaterial absorbers in the simulated THz region. In the sweep optimization procedure in CST, we considered that the unit cell dimensions, P_x = P_y = 20 µm, have to be smaller than λ_min = 54.55 µm for f_max = 5.5 THz (the maximum frequency in the simulated region) to avoid excitation of high-order Floquet modes. The relative permittivity of graphene, assuming an e^(jωt) time dependence of the incident electromagnetic wave, is given by Equation (1) [6], [7], in which σ, ω, and ε_0 are, respectively, the surface conductivity of graphene, the angular frequency, and the vacuum permittivity, and the remaining parameter is the graphene thickness, assumed to be 0.335 nm [28]. σ is the sum of the inter- and intra-band electron transition contributions based on the Kubo formula [6], in which ħ is the reduced Planck constant, k_B = 1.38 × 10^-23 J/K is the Boltzmann constant, e = 1.6 × 10^-19 C is the electron charge, T is the temperature, equal to 300 K, and ζ is the integration variable. τ is the relaxation time [6], [29], in which v_f = 10^6 m/s is the Fermi velocity and µ = 2.22 m^2/(V·s) is the carrier mobility of graphene. The propagation constant of the electromagnetic wave in a graphene-vacuum configuration is given in [6], [7], where k_0 and Z_0 are the wave vector of the incident wave and the vacuum impedance. The equivalent circuit model (ECM) procedure can be summarized in four steps: 1) The impedances (capacitances) of the gaps are calculated. 2) The conductivities of the split-ring resonator arrays (the graphene layer) are calculated. 3) The impedances of the graphene sections (not considering the gaps) of the split-ring resonators in each metamaterial are calculated and plotted. 4) The TE/TM absorption spectra of the whole metamaterial absorbers are calculated by use of the transmission line formula and compared with the numerically simulated ones. 
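As a numerical illustration of the graphene description used in step 2, the sketch below evaluates the widely used intraband (Drude-like) term of the Kubo conductivity together with the relaxation time τ = µE_f/(e·v_f^2) and a thin-film relative permittivity of the form 1 + σ/(jωε_0·t_g). These closed forms are standard in the graphene-THz literature but are stated here as assumptions, since the paper's own Equations (1)-(3) are not reproduced in this text; the interband term, which matters mainly when ħω approaches 2E_f, is omitted for brevity, and the signs depend on the assumed e^(jωt) convention.

```python
# Sketch: intraband (Drude-like) Kubo conductivity of graphene and the
# resulting thin-film permittivity, assuming an exp(+jwt) time convention.
# These are the commonly used closed forms, given as an assumption rather
# than as the paper's exact Equations (1)-(3).
import numpy as np

e    = 1.602e-19        # electron charge, C
kB   = 1.381e-23        # Boltzmann constant, J/K
hbar = 1.055e-34        # reduced Planck constant, J.s
eps0 = 8.854e-12        # vacuum permittivity, F/m

def sigma_intra(f_hz, Ef_eV=0.9, T=300.0, mu=2.22, vf=1e6):
    """Intraband sheet conductivity (S) at frequency f_hz."""
    w   = 2 * np.pi * f_hz
    Ef  = Ef_eV * e
    tau = mu * Ef / (e * vf**2)               # relaxation time, s (~2 ps here)
    pref = 2 * e**2 * kB * T / (np.pi * hbar**2)
    return pref * np.log(2 * np.cosh(Ef / (2 * kB * T))) / (1.0 / tau + 1j * w)

def eps_graphene(f_hz, thickness=0.335e-9, **kw):
    """Equivalent relative permittivity of the graphene sheet."""
    w = 2 * np.pi * f_hz
    return 1.0 + sigma_intra(f_hz, **kw) / (1j * w * eps0 * thickness)

for f in np.linspace(1e12, 5.5e12, 5):        # 1-5.5 THz
    print(f"{f/1e12:4.2f} THz  sigma = {sigma_intra(f):.3e} S")
```

With E_f = 0.9 eV and µ = 2.22 m^2/(V·s), this sketch gives a relaxation time of roughly 2 ps and sheet conductivities of a few millisiemens across 1-5.5 THz, the regime in which the patterned graphene behaves as a lossy, tunable RLC-like sheet.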
The ECM procedure for the graphene-based chiral metamaterials containing different numbers of gaps is based on modeling the graphene resonator array as the equivalent conductivity σ^(TE/TM)_SSRRA-i (in which i can be OG, TG, THG, or FG, denoting the metamaterial containing one, two, three, and four gaps, respectively). In the corresponding relations, θ_in, ε_d, θ_out, and Z_0 are, respectively, the angle of the incident electromagnetic wave, the relative dielectric permittivity of the half-space slab, the angle of the transmitted electromagnetic wave, and the free-space impedance (120π). The TE/TM impedances of the square split-ring resonator array (SSRRA), Z^(TE/TM)_SSRRA-i, are then obtained from these equivalent conductivities. TM- and TE-polarized electromagnetic waves are launched separately onto the metamaterial absorbers composed of SSRRAs with different numbers of gaps in the THz region. The ECMs of the metamaterial absorbers differ for TM and TE waves, which reflects the chiral nature (lack of mirror symmetry) of the designed metamaterials. For the TM incident wave, the SSRRA of Fig. 1(b) is modeled with an RLC circuit (the graphene section), and the gap of the resonator is modeled by series capacitors (shown in Fig. 2(a)). Additionally, for the TM incident wave, the SSRRAs of Figs. 1(c-e) are modeled with an RLC circuit (the graphene section) and two serial gap capacitances (shown in Fig. 2(b)). So, for the TM mode, only the gaps in the horizontal arms can be modeled by capacitors. For the TE incident wave, the SSRRA of Figs. 1(b, c) is modeled with an RLC circuit (the graphene section; shown in Fig. 2(c)), and the gap(s) are not modeled by capacitance(s), as the incident electric field is parallel to the gap(s). For the TE incident wave of the absorber in Fig. 1(d), containing three gaps, only g_3 is modeled by a capacitance (shown in Fig. 2(d)). For the TE incident wave of the absorber in Fig. 1(e), which contains four gaps, only g_3 and g_4 are modeled by capacitances (shown in Fig. 2(e)). In these cases, the incident electric field is normal to the gap(s). So, for the TE mode, only the gaps in the vertical arms can be modeled by capacitors. Each gap is modeled by a capacitance, in which ε_eff is the effective relative dielectric permittivity, w = (L - l)/2 is the gap width, equal to 4 µm, and g_i is the gap length. ε_eff is calculated from the relative ion gel permittivity ε_ig. So, C_gap1 = 145.78, C_gap2 = 97.19, C_gap3 = 72.89, and C_gap4 = 58.31 pF/m. The impedances of the gaps, including the case in which the incident electric field is normal to the gaps, are calculated from these capacitances. For the calculation of Z_gOG,TM (the impedance of the graphene section; Fig. 2(a)) and of Z_gTG,TM, Z_gTHG,TM, and Z_gFG,TM (the impedances of the graphene sections; Fig. 2(b)), the corresponding relations can be generalized: if there are n gaps (g_1, g_2, ..., g_n) in the horizontal arms of the metamaterial absorber, σ_g,TM follows the generalized expression. Likewise, for the calculation of Z_gTHG,TE (the impedance of the graphene section; Fig. 2(d)) and Z_gFG,TE (the impedance of the graphene section; Fig. 2(e)), the relations can be generalized: if there are n gaps (g_1, g_2, ..., g_n) in the vertical arms of the metamaterial absorber, σ_g,TE follows the corresponding expression. Therefore, Z_gTG,TM = Z_gTHG,TM = Z_gFG,TM, and the TE/TM equivalent conductivities of the graphene sections in each metamaterial differ. 
For example, for the structure containing one gap, equations (17) and (25) give, respectively, the equivalent conductivities of the graphene section in the TM and TE modes. This is because of the asymmetric nature (non-mirror symmetry) of the proposed chiral metamaterials. In Fig. 3, the designed ECM of the proposed graphene-based chiral metamaterial absorbers is presented. The equivalent transmission line model and the input impedance of each section of the proposed chiral metamaterial absorbers are given in Fig. 3. The graphene layer is ultra-thin compared to the wavelengths in the spectrum, and it was treated as a point load [31], [32]. The equivalent impedances of the different parts of the absorbers are given in [31] (Equation (42) and related expressions), in which Z^(TE/TM)_ig and β_ig are, respectively, the TE/TM impedances of the ion gel layer and the propagation constant of the THz electromagnetic wave in the ion gel layer, and θ_d is the electrical length of the dielectric layer. The scattering parameters S^(TE/TM)_11SSRRA-i are then calculated; a generic transmission-line sketch of this last step is given after this block of text. (Fig. 6 caption fragment: absorption spectra and E-field distributions in TE and TM modes at 1.91 THz for the metamaterial containing two gaps (Fig. 1(c)), at 1.73 THz for the metamaterial containing three gaps (Fig. 1(d)), and at 2.67 THz for the metamaterial containing four gaps (Fig. 1(e)); results are based on the assumption that E_f = 0.9 eV.) III. RESULTS AND DISCUSSION We performed numerical simulations by the finite element method (FEM) in the frequency domain solver of CST 2018 [5]-[7], [33], [34]. The considered simulation setups, such as the boundary conditions and the mesh sizes, are the same as in [5]-[7]. To excite the metamaterial absorbers, TE and TM electromagnetic waves were launched onto them in the z-direction [5]-[7], [35]-[37]. The TE/TM equivalent conductivities of the SSRRAs, σ_g,TE/TM, were calculated by simulating the SSRRAs on the half-space slab made of Teflon with a thickness of 500 µm and by using Equations (17), (20), (25), (28), and (31). The real and the imaginary parts of the equivalent conductivities of the SSRRAs in the TE mode are given in Figs. 4(a) and 4(b), respectively. As shown, the real and the imaginary parts of the metastructures containing one and two gaps are equal. The real and the imaginary parts of the equivalent conductivities of the SSRRAs in the TM mode are given in Figs. 5(a) and 5(b), respectively. As shown, the real and the imaginary parts of the metastructures containing two, three, and four gaps are equal. As given in Figs. 4(a) and 5(a), the real parts of the equivalent conductivities are positive in the whole frequency range, representing the loss and the resistive nature of the patterned graphene layers. The imaginary parts of the equivalent conductivities of the SSRRAs are given in Figs. 4(b) and 5(b). They contain both positive and negative parts, which means that the inductive and capacitive natures of the SSRRAs, respectively, are dominant. So, we model each SSRRA as a series RLC circuit. Moreover, the TE and TM equivalent conductivities are not equal in each of the metamaterials. This is because of the asymmetric (chiral) or polarization-sensitive nature of the proposed metamaterials. The TE/TM reflection spectra and the E-field distributions of the proposed chiral metamaterial absorbers of Fig. 1 are given in Fig. 6. 
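As referenced above, here is a minimal, single-polarization sketch of step 4 of the ECM: cascading the layer impedances with standard transmission-line relations and converting the input reflection into absorption. The assumptions are mine, not the paper's Equation (53): normal incidence, the patterned graphene plus gap network lumped into one shunt sheet impedance, the ion gel layer ignored, and a perfectly reflecting gold back plane so that A = 1 - |S11|^2; the spacer thickness and the sheet impedance used in the example are hypothetical values.

```python
# Sketch: transmission-line estimate of the absorption of a
# sheet / dielectric-spacer / gold stack at normal incidence.
# Assumptions (not from the paper): the patterned graphene + gap network is
# lumped into a single shunt impedance Z_sheet, the gold back plane acts as a
# short circuit, and A = 1 - |S11|^2 because transmission is blocked.
import numpy as np

Z0 = 376.73          # free-space impedance, ohm
c0 = 2.998e8         # speed of light, m/s

def input_impedance(f_hz, Z_sheet, n_d=1.45, d=10e-6):
    """Impedance seen from free space looking into sheet + spacer + gold."""
    w  = 2 * np.pi * f_hz
    Zd = Z0 / n_d                      # characteristic impedance of spacer
    bd = n_d * w / c0 * d              # electrical length of spacer
    Z_back = 1j * Zd * np.tan(bd)      # shorted (gold-backed) spacer section
    return 1.0 / (1.0 / Z_sheet + 1.0 / Z_back)   # sheet in shunt with spacer

def absorption(f_hz, Z_sheet, **kw):
    Zin = input_impedance(f_hz, Z_sheet, **kw)
    s11 = (Zin - Z0) / (Zin + Z0)
    return 1.0 - abs(s11) ** 2         # no transmission through the gold

# Hypothetical sheet impedance (ohms) standing in for the RLC/gap network:
for f in np.linspace(1e12, 5.5e12, 10):
    print(f"{f/1e12:4.2f} THz  A = {absorption(f, Z_sheet=80 - 40j):.2f}")
```

Running the same routine with the TE and TM sheet impedances obtained in the previous steps would give A_TE and A_TM, and hence LD = A_TE - A_TM.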
As an interesting feature, by a 90° rotation of the incident electromagnetic fields, it is possible to switch between the number and/or location of absorption bands in 1-5.5 THz, which makes these designed chiral absorbers promising candidates for future THz systems. The average of the maximum absorptions of the absorption bands in this work for the metastructure containing one gap (Fig. 1(b)) reaches 95%, while the average of the maximum absorption bands of [8], which has a U-shaped resonator array, reaches 87.5%. For each metamaterial with a different number of gaps, the electric field distributions are calculated and given at one of the resonance frequencies for both TE and TM modes, representing the chiral nature of the metamaterials. For example, the E-field distributions of the metamaterial containing one gap (Fig. 1(b)) for both TE and TM modes at 3.6 THz (a resonance absorption for the TM mode) are given in Figs. 6(b) and 6(c), respectively. The distributions are not equal for the TE and TM modes at 3.6 THz, which shows the chiral nature (asymmetry) of the metamaterial. As shown in Fig. 6(b), the metamaterial does not absorb the TE wave noticeably. As shown in Fig. 6(c), the TM mode is absorbed strongly in the metamaterial. The maximum linear dichroism (LD) reaches 98, 99, 89, and 77% for the metamaterial absorbers containing one, two, three, and four gaps respectively (Figs. 1(b-e)). The absorption spectra and the LD responses are dynamically tunable in 1-5.5 THz by only changing the applied bias voltage, without the need to refabricate the metamaterial absorbers, which saves material, cost, and time. To determine the type of resonances (electric or magnetic) [38], the surface current distributions of the proposed metamaterials at the different resonance frequencies were determined, and the obtained results are summarized in Table 2. Some representative examples are shown in Fig. 7. In Fig. 7(a), the surface currents on the graphene metasurface (left to right) have the opposite direction compared to those on the gold ground plane (right to left). Therefore, the resonance type is magnetic. In Fig. 7(b), the surface currents on the graphene metasurface and the ground plane are not in opposite directions. Also, the surface currents on the graphene patterns do not make a closed loop. So, this resonance is of the electric type. In Fig. 7(c), the surface currents on the graphene metasurface (up to down) have the opposite direction compared to those on the gold ground plane (down to up). Therefore, the resonance type is magnetic. The resonances in Figs. 7(d) and 7(e) are also of the magnetic type because the surface currents on the graphene metasurface make a closed loop. The TE/TM absorption spectra of the metamaterial, obtained by CST (the numerical approach) and Equation (53) (the ECM approach in MATLAB), are shown and compared in Fig. 8. The results obtained by these two methods are in good agreement, thus proving that the proposed ECM is a valid approach to predict the resonance performance. The LD (the difference between the TE and TM absorption/reflection spectra [39]: LD = A_TE - A_TM) versus E_f spectra for the metastructures of Figs. 1(b-e) are given in Fig. 9. As shown, by increasing E_f, the resonance frequency values exhibit a blueshift. This is because the real part of β in Equation (4) decreases as E_f increases [5]. 
So, the resonance frequencies of the LD spectra increase with increasing E_f. A comparison with previously published chiral metamaterial absorbers/metamirrors is summarized in Table 3. The fabrication of the proposed metamaterial absorbers is not within the scope of this work, but the procedure could be the same as that explained in our previous work [6]. IV. CONCLUSION In this work, an equivalent circuit modeling (ECM) approach based on the equivalent conductivities of graphene in the TE/TM modes and on transmission lines is proposed and developed for tunable graphene-based chiral metamaterial absorbers consisting of square split-ring resonator arrays with different numbers of gaps, using a simple and fast MATLAB code, in the terahertz (THz) region. The simulations are performed using the finite element method (FEM) in CST Microwave Studio Software. The simulation results are in good agreement with the ECM ones. Our designed chiral metamaterials are dynamically tunable, and they have maximum linear dichroism (LD) responses of up to 98, 99, 89, and 77% for the designed metamaterials containing one, two, three, and four gaps, respectively. Our designed multi-band chiral absorbers in the 1-5.5 THz range are promising candidates for future THz systems. The possibility to switch between the number and/or location of absorption bands makes our proposed metamaterial absorbers promising enablers in tunable polarization-sensitive THz structures and systems in the future. APPENDIX The incident angle dependence of the TE/TM absorption spectra of the metamaterial was modeled by CST and by Equation (53) (ECM). The obtained results are shown for two different incident angles, θ_in = 0° and 30°, in Fig. 10. The results are in good agreement. Comparing the responses for θ_in = 0° and 30°, we can see that the change of the incident angle does not influence the resonance frequencies, and the magnitude of absorption varies only slightly. Therefore, the absorber structures for both TE and TM modes are not incident-angle dependent.
5,111.4
2022-01-01T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Electrically driven organic laser using integrated OLED pumping Organic semiconductors are carbon-based materials that combine optoelectronic properties with simple fabrication and the scope for tuning by changing their chemical structure1–3. They have been successfully used to make organic light-emitting diodes2,4,5 (OLEDs, now widely found in mobile phone displays and televisions), solar cells1, transistors6 and sensors7. However, making electrically driven organic semiconductor lasers is very challenging8,9. It is difficult because organic semiconductors typically support only low current densities, suffer substantial absorption from injected charges and triplets, and have additional losses due to contacts10,11. In short, injecting charges into the gain medium leads to intolerable losses. Here we take an alternative approach in which charge injection and lasing are spatially separated, thereby greatly reducing losses. We achieve this by developing an integrated device structure that efficiently couples an OLED, with exceptionally high internal-light generation, with a polymer distributed feedback laser. Under the electrical driving of the integrated structure, we observe a threshold in light output versus drive current, with a narrow emission spectrum and the formation of a beam above the threshold. These observations confirm lasing. Our results provide an organic electronic device that has not been previously demonstrated, and show that indirect electrical pumping by an OLED is a very effective way of realizing an electrically driven organic semiconductor laser. This provides an approach to visible lasers that could see applications in spectroscopy, metrology and sensing. The manuscript by Yoshida and co-workers describes the realization of an integrated device allowing for electrically-driven organic lasing in an all-organic structure consisting of a thin OLED with doped injection layers and an organic (BBEHP-PPV) DFB laser structure. An electrically injected organic laser has long been a holy grail of organic semiconductor research, but has been hampered by various effects such as the build-up of long-lived triplets, polarons and poor thermal stability of most organic semiconductors. The first "real" evidence for an electrically driven organic laser was recently published in Ref. 26. However, devices in that work were very short-lived (10-50 pulses) and yield was low. In this submission, Yoshida and co-workers effectively bypass the many difficulties related to electrical injection by optically pumping the organic laser with an OLED integrated in close proximity. While neither the OLED nor the organic laser structure show significant advances over previous reports, this is quite an engineering feat, which required solving some integration issues, with an end result that I believe is appropriate for Nature. While in some sense this is very similar to previous work from the Samuel group where organic lasers were integrated with inorganic LEDs, the fact that this would also be achievable with an OLED was far from obvious. Whether such a device will be of practical use remains to be seen. The work is relatively straightforward and the numbers add up in such a way that I have confidence that the results are correct. Still, I do have a number of recommendations and questions. While I did not have any major problems following the manuscript, I think that it could be written more clearly (and directly) in a way that is a better fit for the readership of Nature. 
Recommendations: -The title is succinct, but I do not feel it correctly reflects the content of the work. One could also have used the same title for previous work from the same group using GaN LED or laser diode pumping. I suggest highlighting the fact that the structure is all organic, while also avoiding the insinuation that this is an organic laser diode. -The complete absence of uncertainties (especially given the topic) in the manuscript is somewhat shocking. Many values are given to 3 significant digits, with no uncertainty, and not a single error bar can be found on the plots. Type A and B uncertainties on the various measured quantities should be reported/combined using statistical analysis and calibration reports, and details of how uncertainties have been calculated should be reported. -The most basic measures of laser performance are the slope efficiency and wall-plug efficiency. There are no actual values on the y axis for any of the electrically-driven laser plots. This should be fixed. -I do not think it is appropriate for the authors to use 7 lines of their introduction to denigrate Ref. 26, in what seems like an attempt to increase the importance of their own work. I believe that simply highlighting that the diodes in the one report on OSLDs are extremely short-lived (tens of pulses) and that yield was low is sufficient. -It would be nice to include photostability data for the OLED to accompany Ext. Data Figure 8. -The use of radiant exitance for the output intensity is somewhat esoteric. In the 1000s of papers on LEDs since the 1960s, I do not ever recall seeing its use for the EL intensity (despite the radiometric meaning being correct). In the field of infrared LEDs, "emittance" is sometimes used, but is also less common than intensity/irradiance. -While the peak brightness (48 W/cm2 at 7 kA/cm2) achieved from the OLED is quite good, the performance seems on par with other reports, which show peak brightnesses of around 10 W/cm2 at 0.5-1 kA/cm2. The only difference is that the OLEDs in this manuscript were driven a little bit harder (and with shorter pulses). I would also re-check the conversion for Refs. 103 and 108, which report luminances of 1.5-2,000,000 cd/m2, which, taking peak spectral values, gives me intensities of approximately 10 W/cm2. Minor comments: -The role of the MoO3 buffer layer is unclear. -Why was 2.0 chosen as the out-coupling simulation cut-off (Methods)? Evanescent waves beyond this wavevector might still couple to the electrodes and thus lead to dissipated power, but not to emitted power (I do not think this will be the case, but the choice should be justified - the cutoff is effectively fixed by the SPP mode frequencies and the physical separation for lossy surface waves). -Why is there a wiring electrode for anode 1 *and* 2 in Fig. 1c? -In Fig. 3c and d, the x-axis titles should be electrically and optically pumped. The x-axis title on the right panels is difficult to see. -I recommend tidying up the text for readability and clarity. In conclusion, I think that this work is technically sound (barring some fixable issues with the manuscript). While the work does not present any major advances in high-brightness OLEDs or organic laser design, it does make major advances in integration, which allow the authors to realize a remarkable device allowing for OLED-driven lasing of an optically-pumped organic semiconductor laser. This is such an important result that I do think it will be of interest to the readership of Nature. 
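The luminance-to-exitance cross-check suggested in the referee's penultimate recommendation can be reproduced in a few lines. The sketch below is a hedged back-of-the-envelope version, assuming a Lambertian emitter and a quasi-monochromatic blue spectrum, so that the radiance is L_v/(683·V(λ)) and the exitance is π times the radiance; the photopic value V(460 nm) ≈ 0.06 is an illustrative assumption, and a rigorous conversion should integrate over the measured electroluminescence spectrum.

```python
# Sketch: converting a reported luminance (cd/m^2) into a radiant exitance
# (W/cm^2) for a Lambertian, quasi-monochromatic blue emitter.
# Illustrative assumptions (not from the manuscript): peak luminous efficacy
# 683 lm/W at 555 nm and a photopic sensitivity V(460 nm) ~ 0.06.
import math

def luminance_to_exitance(L_v_cd_per_m2, V_lambda=0.06):
    """Return the approximate radiant exitance in W/cm^2."""
    radiance = L_v_cd_per_m2 / (683.0 * V_lambda)   # W / (sr * m^2)
    exitance = math.pi * radiance                   # Lambertian: M = pi * L
    return exitance / 1e4                           # W/m^2 -> W/cm^2

for L_v in (1.5e6, 2.0e6):   # luminances quoted for Refs. 103 and 108
    print(f"{L_v:.1e} cd/m2 -> ~{luminance_to_exitance(L_v):.1f} W/cm2")
```

With these assumptions, 1.5-2 × 10^6 cd/m2 maps to roughly 11-15 W/cm2, the same order of magnitude as the referee's estimate of approximately 10 W/cm2.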
Referee #2 (Remarks to the Author): Organic semiconductor laser (OSL) and OLED architectures have been extensively studied for more than 30 years since the first reports of F. Hide et al. in 1997 and C. W. Tang et al. in 1987, respectively. This paper is an exciting report on the extremely well-elaborated organic laser device architecture demonstrating lasing by indirect excitation of an organic lasing medium (OSL) by an OLED. The original device architecture having multiple layers was accomplished after overcoming many technical issues, such as coupling the light into the lasing area and decreasing the device resistivity including wiring, in addition to the very short pulse driving method of 5 ns. All the technological integration is impressive, realizing indirect pumping of the PPV polymer-based OSL by the fluorene-based OLED. Technologically this is a significant breakthrough in organic optoelectronics. Further, the estimated current threshold is reasonably consistent with the previously reported OSLD by Sandanayaka et al. (Appl. Phys. Express, 2019). 1) The title should be changed. The title of "An electrically driven OSL" misleads the reader towards direct pumping of an organic active layer by current injection. This device uses an indirect pumping method. I think it is important to separate direct and indirect pumping clearly in the title. 2) The fundamental I-V-EQE characteristics are missing. The DC I-V-EQE and pulsed I-V-EQE should be provided. I think the breakdown behavior in both DC and pulse operation should be provided. In particular, I expect a unique roll-off behavior with short pulse operation. Maybe the comparison of different pulse durations would reveal a unique exciton deactivation mechanism from the aspect of Joule heating, electrical field quenching, or triplet contributions. 3) I think two important papers should be cited fairly in the document: C. W. Tang's first OLED paper, APL (1987), and F. Hide's first polymer laser, Science (1997). 4) The refractive index values of each layer should be summarized, indicating the confinement of light in the OSL layer. 
Summary
Achieving electrical pumping of organic semiconductor lasers has long been considered a formidable challenge in organic electronics, since the first attempts more than 25 years ago. Here the authors use an approach based on indirect electrical pumping, which is in fact optical pumping by an Organic Light-Emitting Diode (OLED), and which has never been shown before. To achieve this goal, the most important challenge is to fabricate an OLED that has a high enough exitance (power per unit area) to reach the threshold of organic lasers, which should be between several tens and around a hundred Watts per cm² according to the present state of the art. In a previous paper this group had shown that inorganic LEDs could be used for organic laser pumping, but those sources can reach peak exitances in the kW/cm² range, whereas OLEDs, especially in the blue, are limited to a few Watts per cm². In order to realize this breakthrough, the authors had to combine several elements: a) a very good laser material (BBEHP-PPV); b) an optimized so-called "sub-structured" distributed feedback resonator (the one used to demonstrate the lowest reported threshold to date of 15 W/cm² in an organic laser); c) an OLED that is directly connected to the organic laser part through high-index materials to avoid the outcoupling losses to air; d) a clever approach to combine the OLED and the laser involving pressing through a parylene interlayer; e) a blue OLED that is pushed to an exitance of 46 W/cm² thanks to short pulse operation of only 5 ns, doped transport layers and a very efficient low-lifetime spirobifluorene material. The laser presents a threshold of 2.83 kA/cm², and interesting experiments are presented at the end of the paper that compare optical, electrical and mixed optical/electrical pumping of the same device.
General opinion
This study presents the first laser to be optically pumped by an all-organic incoherent source (that is itself electrically excited), but this does not make the device an electrically-pumped laser. Indeed, the title chosen for the paper is, in my opinion, misleading. In semiconductor laser physics, the term "electrical pumping" clearly refers to a process in which the population inversion is created by the injection of a current into the device, which is not the case here. Although terms like "electrical injection" or "organic laser diode" are safely avoided, the term "electrically-driven" chosen in place of "electrically-pumped" contributes to making things unclear, as it suggests that the device is not indeed electrically pumped but rather just "driven" by electricity, which is obviously the case of any laser at some point. I think this contribution would better be described as indirect electrical pumping of an organic semiconductor laser by an organic light-emitting diode. Even though the title is certainly not adequate, I think this work is very important as 1) it sets a record for the peak exitance of an OLED in the blue, around 50 W/cm², reaching a level where OLEDs can be used as pump sources for lasers; and 2) it paves the way towards an organic laser device that would be fully realized with organic electronics technologies. There is still obviously a lot of work ahead to make it a practicable device (here both evaporation and solution processing are used, for the OLED and the laser respectively, and the final structure consists of two half-devices that are mechanically pressed together), but one can consider that the direction is now shown for exciting future research. This work certainly deserves to be published in a high-impact journal; however, I am not sure that Nature is the most suited journal for this paper, considering the two following arguments: 1) this is not a demonstration of an electrically-pumped organic laser, as argued above; 2) optical pumping by an OLED has been rendered possible here by an extremely clever combination of state-of-the-art concepts in chemical, electrical and photonic engineering, gathered all at once in a single device. This breakthrough has not, however, been made possible by any disruptive novel concept, as the molecular design, the short pulsed operation of OLEDs and the low-threshold organic laser had all been published separately before. These two reasons bring me to the conclusion that although this work will be of indisputable interest for the organic electronics community, it might have a more restricted impact for the rest of the scientific community. I would then recommend publication in another journal such as Nature Communications or Nature Photonics.
Questions and remarks on specific points
The paper is excellent and is written in an exquisitely simple, precise style. I wish the paper could contain more information about the device, notably its laser beam characterization. The device fabrication requires many steps, so it would be useful for any group willing to reproduce the data to have some practical information, such as: how many devices were needed for one device to work? Were the devices reproducible? I have a few specific remarks or questions that should be addressed:
• Line 42: The introduction nicely puts the work in perspective and reminds the reader that up to now almost all organic lasers were pumped by other lasers. The authors should however mention here their own work, cited hereafter as ref. 27, on the indirect pumping of organic lasers by inorganic LEDs. This would better show how this study contributes to reducing the power density needed to operate organic lasers.
• Line 96: The choice of a double layer of parylene and nanolaminates is not justified and the reader is sent back to ref. 29: a brief description of the reason for this choice, which seems very complicated at first sight, would be welcome.
• Line 121: It is not clear why the high current density of 10 kA/cm² is rejected from the data of ref. 30: is there a limit in current density that the device should not overcome, based for instance on degradation arguments?
• Line 147: There should be a comment on the differences between short optical pulses obtained from an LED and an OLED: why is it easier to obtain shorter rise times compared to LEDs?
• Line 225: As explained before, I don't agree with the term "electrical pumping" used here. Actually there is no reason that optical pumping by an OPO or by an OLED would be that different, provided that they have similar durations and that the absorption is the same.
• Fig. 1: The 130 µm distance represented by the double arrow is not consistent with the direction shown for the DFB grating grooves; this is a little bit confusing.
• Fig. 2: I think the characterization of the OLED could be more complete. In particular there should be an I-V curve obtained in CW conditions, compared perhaps with the same under pulsed operation.

Referee #4 (Remarks to the Author):
This manuscript reports an electrically-pumped organic laser. Although we still don't know what an organic laser might be good for, this result is a remarkable achievement that many considered to be impossible. It represents perhaps the culmination of 20-30 years of work on organic EL. This is not the first report of an organic electrically-pumped laser (Chihaya Adachi et al. have a publication from 2019), but that device was problematic in many ways: there are arguments about whether it truly describes a laser, and the device was very unstable, lasting for only a handful of pulses. The approach by Yoshida et al. is different, and given the significant step forward in performance, I think it is a notable advance conceptually as well as practically. This device features a separate OLED pump and optical gain region. The separation minimizes losses in the optical gain structure. Technologically, the device is one of the most sophisticated structures ever built from organic semiconductors. There are 17 layers in the cross section shown in Fig. 1b. It combines a sub-structured grating, a solution-processed gain medium, and most importantly, an OLED capable of operation at nearly 10 kA/cm². I found the threshold, beam, and joint EL-optical measurements to be compelling. I have only a couple of suggestions:
1. In Fig. 2a, the purple curves are linked to the right-hand axis, but it looks like this is unnecessary given that the spectra are all normalized.
2. Given the sophistication of the device it is not surprising that significant modeling was performed (described on pages 18-20). For the field to move forward, it will definitely need predictive modelling of these structures. In particular: (i) It would be very interesting to see graphical results from the EM simulations, including the predicted spatial distribution of output power from the OLED and the overlap with the lasing mode. That would provide a simulation of the confinement etc. and a sense of the coupling efficiency between the OLED and laser. There is discussion on pg. 10 about the coupling between the gain region and the pump in the integrated device relative to separated structures, but some sense of the absolute efficiencies, losses etc. would be very useful. If possible, a spatial plot of the losses would also be valuable. (ii) I'm surprised that the authors assumed a Lambertian pattern, especially given the peaky output of the OLED, which is presumably due to the weak cavity formed by the anode and cathode. Why was it necessary to assume a particular emission pattern? One imagines that the emission pattern could be calculated. (iii) It would be interesting to compare the modelling results (especially the predicted coupling between the OLED and cavity) to the observed threshold.
3. Readers may benefit from some context. The concept of a separated pump and lasing cavity has a long history in lasing. Diode pumping is used in many conventional systems today (especially high-power lasers?). In organics, the concept dates at least to PRB 66, 035321 (2002), perhaps earlier.
4. Finally, we have struggled since the early days of optically-pumped organic lasers to identify the unique advantages of organic EL lasers. The immediate impact of this work is to demonstrate that they are possible. The closing speculation about a few potential, and rather obscure, applications struck me as a potential distraction from the main result.

Reviewer #1 (Remarks to the Author):
1-1 The title is succinct, but I do not feel it correctly reflects the content of the work. One could also have used the same title for previous work from the same group using GaN LED or laser diode pumping. I suggest highlighting the fact that the structure is all organic, while also avoiding the insinuation that this is an organic laser diode.
Following the Reviewer's suggestion, we modified our title to be: "Electrically driven organic laser using integrated OLED pumping". We believe the new title clearly differentiates the paper from prior work, and conveys that the device is all-organic and operated by electrical drive of the OLED.
1-2 The complete absence of uncertainties (especially given the topic) in the manuscript is somewhat shocking. Many values are given to 3 significant digits, with no uncertainty, and not a single error bar can be found on the plots. Type A and B uncertainties on the various measured quantities should be reported and combined using statistical analysis and calibration reports, and details of how the uncertainties have been calculated should be reported.
Following the Reviewer's suggestion, we assessed type A and B uncertainties of the measured quantities in our experiments, and calculated the extended uncertainties with coverage factor k = 2, which gives an interval of 95% confidence for a normal distribution. For example, we considered uncertainties in the calibration scale of the microscope used for the OLED size measurements, the reading resolution of the oscilloscope, statistical variation in repeated measurements, as well as uncertainties in the linear fits of laser output below and above threshold, and their intersection, to estimate the lasing threshold. Table 1.1 shows an example of the uncertainties in the measurement of current density, which we previously expressed to 3 significant figures as the Reviewer mentioned. The extended uncertainty for the current density of 6.3 kA/cm² is ±0.4 kA/cm², corresponding to a fractional uncertainty of 6%. This uncertainty is mainly due to the calibration scale of the microscope used for the OLED size measurement. We have now restated the current densities to 2 significant figures with uncertainties. We have revised the values in the manuscript and its figures to show the relevant uncertainties in our measurements. As an example, in Response Fig. 1.1 below we revised manuscript Fig. 2c to show estimates of peak radiant exitance averaged over interpolated data of 11 pixels, and their extended uncertainties in both current density and radiant exitance. We note that in order to show variations of light output from different OLED pixels within the same fabrication run, we have changed our benchmark current density used for the maximum radiant exitance of the PNPN-OLED to a lower value, i.e., from 6.7 kA/cm² (previous version of this paper) to 6.3 kA/cm² (this version).
We also added the following sentences to the caption of Fig. 2c: Black symbols show the data of 4 different pixels fabricated in the same batch. Red symbols show estimates of peak radiant exitance of the PNPN-OLED at 5 kA/cm² and 6.3 kA/cm², averaged over interpolated data of 11 pixels including the 4 pixels shown in the figure and 7 pixels from a different batch. Error bars are extended uncertainties of the measurement with a coverage factor of 2.
Also, we added a section in the Methods to describe the evaluation of measurement uncertainties, 'Measurement uncertainties in OLED and laser characterization': Uncertainties in our measurements are expressed as extended uncertainties for an interval of 95% confidence (coverage factor k = 2). These take account of combined uncertainties in the calibration of the energy meter and the scale of the microscope used for the OLED size measurements, the reading resolution and time resolution of the oscilloscope, pixel-to-pixel variations of OLED light output and variations in repeated measurements, as well as uncertainties in the linear fitting of laser output to estimate the lasing threshold. The uncertainties in radiant exitance, EQE, and the laser threshold under optical pumping include a 10% calibration uncertainty of the energy meter. The uncertainty in threshold current density is mainly due to the calibration scale of the microscope used to measure the size of the OLED.
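For readers wishing to reproduce this style of uncertainty budget, a minimal sketch of the combination procedure is given below, assuming independent type A and type B relative standard uncertainties combined in quadrature and expanded with coverage factor k = 2; the component names and values are illustrative only, not the entries of Table 1.1.

```python
import math

def extended_uncertainty(components, k=2.0):
    """Combine independent standard-uncertainty components in quadrature
    (root sum of squares) and expand with coverage factor k; k = 2 gives an
    interval of about 95 % confidence for a normal distribution."""
    combined = math.sqrt(sum(u ** 2 for u in components))
    return k * combined

# Illustrative (hypothetical) relative standard uncertainties for a current
# density J = I_peak / A_OLED; the real components are listed in Table 1.1.
u_rel = [
    0.028,   # OLED size calibration in the microscope image (type B)
    0.010,   # repeatability of the peak-current reading (type A)
    0.005,   # oscilloscope reading resolution (type B)
]

J = 6.3                                # kA/cm^2, example value
U_rel = extended_uncertainty(u_rel)    # expanded relative uncertainty (k = 2)
print(f"J = {J:.1f} +/- {U_rel * J:.1f} kA/cm^2 ({100 * U_rel:.0f} % at k = 2)")
```

With the illustrative components shown, the expanded relative uncertainty is about 6%, i.e. ±0.4 kA/cm² on 6.3 kA/cm², which is the order of magnitude quoted above.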
Response Figure 1.1 (Revised Fig. 2c). Black symbols show the data of 4 different pixels fabricated in the same batch. Red symbols show estimates of peak radiant exitance of the PNPN-OLED at 5 kA/cm² and 6.3 kA/cm², averaged over interpolated data of 11 pixels including the 4 pixels shown in the figure and 7 pixels from a different batch. Error bars are extended uncertainties of the measurement with a coverage factor of 2.
1-3 The most basic measures of laser performance are the slope efficiency and wall-plug efficiency. There are no actual values on the y axis for any of the electrically-driven laser plots. This should be fixed.
We regard the most basic measures of laser performance to be threshold and slope efficiency, and agree that these should be clearly stated in the paper. While the pulse energy should also be clearly stated, it does not need to be on all graphs, and we prefer to stay close to the quantity measured by the instrument. A similar approach has been taken in many high-quality laser papers in the Nature family that show power characteristics recorded with spectrographs in uncalibrated units (and many do not actually include calibrated efficiency): (1) Nature Photon. 11, 784-788 (2017), https://doi.org/10.1038/s41566-017-0047-6, perovskite CW laser (Prof. Giebink); (2) Nat. Commun. 11, 271 (2020), https://doi.org/10.1038/s41467-019-14014-3, lasing from colloidal quantum dots in an LED-like device structure (Dr. Klimov); (3) Nature 585, 53-57 (2020), https://doi.org/10.1038/s41586-020-2621-1, room temperature CW lasing from perovskites (Prof. Adachi); (4) Nat. Photon. 14, 452-458 (2020), https://doi.org/10.1038/s41566-020-0631-z, intracellular laser (Prof. Gather); (5) Nat. Photon. 15, 738-742 (2021), https://doi.org/10.1038/s41566-021-00878-9, lasing from PbS-based quantum dots (Dr. Konstantatos).
We have therefore added sentences on the pulse energy and slope efficiency of our laser in section 6 as follows: We identify this change as the threshold current density of about 2.8 kA/cm². The FWHM of our laser for a current density of 4.9 kA/cm² is 0.09 nm, limited by the spectral resolution of the measurement system. The maximum output pulse energy was (1.5 ± 0.1) × 10⁻⁵ nJ, and we calculate the slope efficiency (peak optical power / peak input current) of the laser to be 2.1 ± 0.2 µW/A. The laser efficiency is currently limited by two factors: a significant roll-off in OLED quantum efficiency under intense short-pulse operation and a low ratio of surface out-coupling to other losses in the DFB laser cavity. Further refinement of the cavity design, and a better understanding of the dynamics of OLEDs under nanosecond pulsed operation, should each lead to significant future improvements in laser efficiency.
We also added text in the 'Electrically driven laser characterization' section in Methods: The output pulse energy and slope efficiency of the electrically driven laser were determined by calibrating the response of the CCD camera to a calibrated energy meter (J3S-10, Coherent); the organic laser pulse duration was measured to be 2.3 ± 0.1 ns using a silicon avalanche photodiode (APD430A2, Thorlabs).
We have not included wall-plug efficiency in the characterisation because the key result of this paper is to show that we can exceed threshold. Optimisation of wall-plug efficiency will require the developments listed above to increase the slope efficiency and to operate further above threshold.
1-4 I do not think it is appropriate for the authors to use 7 lines of their introduction to denigrate Ref. 26, in what seems like an attempt to increase the importance of their own work. I believe that simply highlighting that the diodes in the one report on OSLDs are extremely short-lived (tens of pulses) and that yield was low is sufficient.
We were actually trying to give a balanced account of the work, and want to mention some of the positive features such as the clever materials design. We have reconsidered the 7 lines and shortened them to 5 lines as follows: Adachi and co-workers found that the absorption spectrum of polarons and triplets of a carbazole-based laser material did not overlap with the gain spectrum 24, so that one of the above problems could be overcome 25. They showed some features of lasing, including narrowing of the emission spectrum. However, the emitted beam was not very clear, and the yield of these devices was low (5%) and their stability was very poor (operated for 20 pulses above threshold).
1-5 It would be nice to include photostability data for the OLED to accompany Ext. Data Figure 8.
We have added operational lifetime data for the OLED to Extended Data Fig. 8.
1-6 The use of radiant exitance for the output intensity is somewhat esoteric. In the 1000s of papers on LEDs since the 1960s, I do not ever recall seeing its use for the EL intensity (despite the radiometric meaning being correct). In the field of infrared LEDs, "emittance" is sometimes used, but it is also less common than intensity/irradiance.
We agree that radiant exitance is not widely used, but it is the correct term for the key quantity in our study, which is the power per unit area leaving the OLED. One reason for its rarity in the OLED field is that nearly all OLED work is for displays and lighting and so uses photometric units. Alternatives to radiant exitance, such as intensity and emittance, are used in several ways and so carry some ambiguity which we wish to avoid. Irradiance has the same units but is a different quantity: the power per unit area falling on a surface. In our integrated device, the distinction between the power per unit area leaving the OLED and the power per unit area falling on the laser is very important and so requires greater precision of terminology than may have been used in the past.
1-7 While the peak brightness (48 W/cm² at 7 kA/cm²) achieved from the OLED is quite good, the performance seems on par with other reports, which show peak brightnesses of around 10 W/cm² at 0.5-1 kA/cm². The only difference is that the OLEDs in this manuscript were driven a little bit harder (and with shorter pulses). I would also re-check the conversion for Refs. 103 and 108, which report luminances of 1.5-2,000,000 cd/m²; taking peak spectral values gives me intensities of approximately 10 W/cm².
We note that reference 103 (Ahmad, V. et al.) is now reference 100 in the revised manuscript, and reference 108 (Shukla, A. et al.) is now reference 105 in the revised manuscript. We believe the calculated light output of the OLEDs in these publications is not around 10 W/cm². The detail of the calculation is described in the 'Literature survey of device performance and details of data collection'; here, we show our calculation of their radiant exitances step by step as an example.
In ref. 103 (Ahmad, V. et al.), they achieved a maximum brightness (L) of 3,000,000 cd/m². The peak wavelength of 550 nm is given in a different paper (Burns, S. et al., Scientific Reports 7, 40805 (2017)). By assuming monochromatic light at 550 nm, we converted photometric units to radiometric units. We note that the photopic response at 550 nm is V(550 nm) = 0.9949. Finally, the radiant exitance was calculated by expanding the brightness over all emission angles by assuming a Lambertian emission pattern, i.e., πL:
Radiant exitance = πL / (683 × V(550 nm)) = 13,900 W/m² = 1.39 W/cm².
In ref. 108 (Shukla, A. et al.), they achieved L of 1,500,000 cd/m² and an EQE (η_EQE) of 0.85% at a current density (J) of 90 A/cm². A peak wavelength (λ) of 565 nm is given in the same paper. By assuming monochromatic light at 565 nm, we converted the EQE and the current density to radiant exitance. We note that we used EQE and current density instead of luminance to estimate the radiant exitance, as values calculated from luminance may vary significantly depending on the emitter spectrum, as mentioned in the Methods. The radiant exitance is calculated from the production rate of photons estimated from the EQE and current density, and then considering the energy of each photon:
Radiant exitance = η_EQE × (J / e) × (hc / λ) ≈ 1.7 W/cm²,
where h is the Planck constant, c is the speed of light, e is the elementary charge and λ is the peak wavelength.
We also note that it is not trivial to make OLEDs fast enough to respond to shorter pulses, and that our device has both double the radiant exitance of the previous record OLED (at any wavelength) and gives more than ten times higher light output than the previous record deep blue OLED in the region of 430 nm.
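The two conversion routes above can be checked numerically with the short sketch below, assuming monochromatic emission at the stated peak wavelengths and a Lambertian pattern for the luminance route; the function names are ours and the constants are standard physical constants.

```python
import math

H = 6.626e-34    # Planck constant (J s)
C = 2.998e8      # speed of light (m/s)
E = 1.602e-19    # elementary charge (C)

def exitance_from_luminance(L_cd_m2, V_lambda):
    """Radiant exitance (W/m^2) from luminance, assuming monochromatic emission
    and a Lambertian pattern: luminous exitance pi*L divided by 683*V(lambda)."""
    return math.pi * L_cd_m2 / (683.0 * V_lambda)

def exitance_from_eqe(eqe, J_A_m2, wavelength_m):
    """Radiant exitance (W/m^2) from EQE and current density: photon emission
    rate per unit area (eqe * J / e) times the photon energy h*c/lambda."""
    return eqe * J_A_m2 / E * H * C / wavelength_m

# Ref. 103 example: L = 3,000,000 cd/m^2 at 550 nm, V(550 nm) = 0.9949
print(exitance_from_luminance(3.0e6, 0.9949) / 1e4, "W/cm^2")      # ~1.4 W/cm^2

# Ref. 108 example: EQE = 0.85 %, J = 90 A/cm^2 = 9e5 A/m^2, peak at 565 nm
print(exitance_from_eqe(0.0085, 9.0e5, 565e-9) / 1e4, "W/cm^2")    # ~1.7 W/cm^2
```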
Minor comments:
1-8 The role of the MoO3 buffer layer is unclear.
The role of this layer is to prevent damage to the active area when it is coated with epoxy. We mention this in the 'PNPN-OLED fabrication' section in Methods.
1-9 Why was 2.0 chosen as the out-coupling simulation cut-off (Methods)? Evanescent waves beyond this wavevector might still couple to the electrodes and thus lead to dissipated power, but not to emitted power (I do not think this will be the case, but the choice should be justified; the cutoff is effectively fixed by the SPP mode frequencies and the physical separation for lossy surface waves).
We are confident that a normalised wavevector of 2.0 (twice the in-plane wavevector in the light-emitting layer of the OLED) is large enough to capture the evanescent waves in the device. This is illustrated in Response Figure 1.2, which shows our calculation of the dissipation spectra at 430 nm in the PNPN-OLED. The evanescent SPP mode and lossy surface waves have negligible power beyond an in-plane wavevector of 1.2. In selecting the value of 2.0 for our calculations we followed the approach in Ref. [52], but expanded the normalized wavevector range beyond their maximum value of 1.6. Following the reviewer's suggestion, we added a justification of the cut-off in the 'Calculation of outcoupling efficiency' in Methods: The simulation was conducted within a wavelength range from 300 nm to 800 nm with a 1 nm step and, at each wavelength, dissipation powers were calculated for normalized in-plane wavevectors from 0.0 to 2.0 with a step size of 0.002. This maximum in-plane wavevector was chosen to ensure that the calculation captures the dissipation into all evanescent modes in the device and is a wider range than previously used in related OLED calculations in Ref. 52. We confirmed that the evanescent SPP mode of the device has negligible power beyond this range. In the model, the PNPN-substrate was included in the OLED stack; the optical constants of the parylene and nanolamination layers were obtained from ref. 29.
1-10 Why is there a wiring electrode for anode 1 *and* 2 in Fig. 1c?
We intended to refer to the different wiring electrodes that contact the anode (and cathode), rather than "anode 1 and 2". For clarity, we have revised the labels of the wiring electrodes to "wiring electrode 1 for cathode", "wiring electrode 2 for anode" and "wiring electrode 3 for anode". We revised Fig. 1 as shown in Response Figure 1.3, and the sentences in 'PNPN-OLED fabrication' for clarity as follows: On the PNPN-substrate, 15-nm-thick molybdenum trioxide (MoO3, Merck) was evaporated through a shadow mask. Then, 1.1-µm-thick aluminium (Al) was evaporated through another shadow mask (wiring electrode mask) to form wiring electrode 1 for the cathode and wiring electrode 2 for the anode (see Fig. 1c). Aluminium was evaporated with a box heater with a crucible (EVCH5 and EVC5INTSPL01, Kurt J. Lesker) at rates of around 0.1 nm/s for the initial 100 nm, and the rate was then gradually increased up to 2 nm/s. After the HTL evaporation, the evaporation chamber was vented to swap and introduce materials for the following evaporation. Around 5-µm-thick aluminium was evaporated to form the cathode and wiring electrode 3 for the anode, with another shadow mask (top electrode mask). The Al evaporation was split into three sections to prevent the OLED warming up during the evaporation.
1-11 In Fig. 3c and d, the x-axis titles should be "electrically" and "optically pumped". The x-axis title on the right panels is difficult to see.
We revised the axis labels to 'electrically driven' and 'optically pumped' and made the x-axis title on the right panels larger, as shown in Response Figure 1.4.
1-12 I recommend tidying up the text for readability and clarity.
We hope that the revisions made to the manuscript improve its readability and clarity.
In conclusion, I think that this work is technically sound (barring some fixable issues with the manuscript). While the work does not present any major advances in high-brightness OLEDs or organic laser design, it does make major advances in integration, which allow the authors to realize a remarkable device allowing for OLED-driven lasing of an optically-pumped organic semiconductor laser. This is such an important result that I do think it will be of interest to the readership of Nature.
Reviewer #2 (Remarks to the Author):
2-1 The title should be changed. The title "An electrically driven OSL" misleads the reader towards direct pumping of an organic active layer by current injection. This device uses an indirect pumping method. I think it is important to separate direct and indirect pumping clearly in the title.
Following the Reviewer's suggestion, we modified our title to be: "Electrically driven organic laser using integrated OLED pumping". We believe the new title precisely describes our work, conveys the indirect pumping approach used in the integrated laser, and leaves space for future publications on injection lasers.
2-2 The fundamental I-V-EQE characteristics are missing. The DC I-V-EQE and pulsed I-V-EQE should be provided. I think the breakdown behaviour in both DC and pulsed operation should be provided. In particular, I expect a unique roll-off behaviour with short pulse operation. Maybe the comparison of different pulse durations provides a unique exciton deactivation mechanism from the aspect of Joule heating, electrical field quenching, or triplet contributions.
We have added the example DC and pulsed J-V-EQE OLED data shown in Response Fig. 2.1 (DC measurements were made on a device on a glass substrate, and nanosecond pulsed data were taken using the PNPN substrate used in the integrated laser) into Extended Data Fig. 3. These show a significant roll-off in EQE under nanosecond pulse operation. We do not have breakdown data of the devices under different time-scale operation, but note that this is not needed to demonstrate lasing in our devices.
Response Figure 2.1, Example DC (squares) and pulsed (circles) characteristics of the OLED used in the present study: a, voltage-current density and b, EQE-current density.
2-3 I think two important papers should be cited in the document: C. W. Tang's first OLED paper, APL (1987), and F. Hide's first polymer laser, Science (1997).
We are happy to add these important papers to the first sentence of the introduction: Organic semiconductors consist of conjugated molecules which can be simply deposited by evaporation or from solution to make a range of electronic and optoelectronic devices 1-7,12-15.
2-4 The refractive index values of each layer should be summarized, indicating the confinement of light in the OSL layer.
We have added a graph showing the refractive index spectra of the OSL materials to Extended Data Fig. 5, as shown in Response Figure 2.2.
Response Figure 2.2, Refractive index spectra of the different layers in the organic semiconductor laser.
3-1 This study presents the first laser to be optically pumped by an all-organic incoherent source (that is itself electrically excited), but this does not make the device an electrically-pumped laser. Indeed, the title chosen for the paper is in my opinion misleading. In semiconductor laser physics, the term "electrical pumping" clearly refers to a process in which the population inversion is created by the injection of a current into the device, which is not the case here. Although terms like "electrical injection" or "organic laser diode" are safely avoided, the term "electrically-driven" chosen in place of "electrically-pumped" contributes to making things unclear, as it suggests that the device is not indeed electrically pumped but rather just "driven" by electricity, which is obviously the case of any laser at some point. I think this contribution would better be described as indirect electrical pumping of an organic semiconductor laser by an organic light-emitting diode.
We have changed the title of the paper to "Electrically driven organic laser using integrated OLED pumping". We believe the new title precisely describes our work, conveys the indirect pumping approach used in the integrated laser, and leaves space for future publications on injection lasers.
3-2 The paper is excellent and is written in an exquisitely simple, precise style. I wish the paper could contain more information about the device, notably its laser beam characterization. The device fabrication requires many steps, so it would be useful for any group willing to reproduce the data to have some practical information, such as: how many devices were needed for one device to work? Were the devices reproducible?
Integration of the OLED and OSL is an important process for the laser devices. PNPN-OLEDs and BBEHP-PPV lasers with similar performance were routinely made with moderate yield (around 60%). However, when integrated, some did not show the characteristics of laser emission. We observed lasing in 3 out of 14 integrated devices, giving a yield of around 20%; thus roughly 5 devices were needed for one device to work. We measured the spatial distribution of the beam for two of the three devices, and both showed beam-like output. Thus, we find that the lasing characteristics of the electrically driven organic lasers are reproducible, although threshold current densities differed between devices. This is probably because we assemble these devices manually and the applied pressure is not consistently controlled; this may also explain why some devices did not show laser action. By improving the assembly process, the reproducibility may be improved in the future. In addition, the available radiant exitance was very close to the threshold of the OSL, so small variations in OLED performance will also contribute to the variation in the threshold currents.
Following this suggestion, we have added information on the device yield in 'PNPN-OLED Fabrication' in Methods as follows: Widths of the OLEDs differ slightly from sample to sample, 120-150 µm, while the lengths are similar (1 mm), probably due to the contact between the top shadow mask and the substrate. The sizes of all samples were therefore measured and used to estimate performance. PNPN-OLEDs with similar radiant exitance were made with a yield of around 60%.
Also, we added sentences about the yield in 'Laser sample Fabrication' in Methods as follows: Finally, a parylene layer of 1.5 µm thickness was deposited by the parylene coater as the contact layer. Lasers with similar threshold were made with a yield of around 60%.
Also, we added sentences in the same section as follows: The integration was conducted in a clean room to avoid the inclusion of particles between the OLED and the organic laser, which was found to be important for stable operation. Integration of the OLED and the BBEHP-PPV laser is an important step in the fabrication that can affect the laser threshold current density. We tested 14 devices in this study and 3 showed narrow emission (a yield of around 20%). We tested the spatial output of 2 of the devices, which both showed a similar beam-like spatial distribution of the output light, as shown in Fig. 3. The low yield is probably due to the manual application of pressure in the integration, which was not consistently controlled. By improving the assembly procedure, the reproducibility may be improved.
I have a few specific remarks or questions that should be addressed:
3-3 Line 42: The introduction nicely puts the work in perspective and reminds the reader that up to now almost all organic lasers were pumped by other lasers. The authors should however mention here their own work, cited hereafter as ref. 27, on the indirect pumping of organic lasers by inorganic LEDs. This would better show how this study contributes to reducing the power density needed to operate organic lasers.
We thank the reviewer for the suggestion. We now mention Ref. 27 (now Ref. 26 in the revised manuscript) in the last paragraph of the introduction.
The gain medium is then excited by electroluminescence from the charge injection region 26. This is conceptually similar to diode-laser-pumped solid state lasers, and to nitride LED pumping 26-28, but here we achieve a fully integrated all-organic device. In this way we avoid the losses due to injected charges, greatly reduce losses due to triplets, and also reduce losses due to contacts.
3-4 Line 96: The choice of a double layer of parylene and nanolaminates is not justified and the reader is sent back to ref. 29: a brief description of the reason for this choice, which seems very complicated at first sight, would be welcome.
The double layer of parylene and nanolaminates is used to give an enhanced barrier to oxygen and water compared with a single layer, as shown in Ref. 29. Following this suggestion, we added a description in the main text as follows: The OLED and its 'PNPN-substrate' were then removed from the glass carrier for transfer onto the organic laser waveguide. The two pairs of P and N layers were used to give a better barrier to oxygen and moisture than would be provided by a single pair (Ref. 30). The laser comprised a 230-nm-thick BBEHP-PPV layer deposited on a distributed feedback grating; a 2.2 µm cladding layer of poly(vinyl-pyrrolidone) (PVPy) and a coupling layer of 1.5 µm parylene were coated on the BBEHP-PPV to complete the laser section.
3-5 Line 121: It is not clear why the high current density of 10 kA/cm² is rejected from the data of ref. 30: is there a limit in current density that the device should not overcome, based for instance on degradation arguments?
We have examined Ref. 30 (now Ref. 32) and find current densities up to 4.5 × 10⁶ mA/cm², i.e., 4.5 kA/cm², so no data are rejected from this paper. Please note that our comment on the need for 10 kA/cm² to achieve 50 W/cm² does not refer to data in this reference, but is our deduction of what would be required based on an extrapolation of the data in this reference. We have clarified how we refer to this paper on page 5, where the text now reads: In reported OLEDs, radiant exitance around 20 W/cm² has been achieved at 4.5 kA/cm² with an external quantum efficiency of 0.2% 31. For this efficiency, a very high current density, over 10 kA/cm², would be needed to give 50 W/cm² light output. To compound the difficulty of injecting such a high current density, we also need emission in the blue.
3-6 Line 147: There should be a comment on the differences between short optical pulses obtained from an LED and an OLED: why is it easier to obtain shorter rise times compared to LEDs?
The GaN LEDs used in our previous work had a larger emission area of ~1 mm², and so an increased RC time constant may have limited the rise time. We have added a comment on the area to the manuscript: We note that the obtained rise time is much shorter than the 6 ns rise time of the larger area (~1 mm²) GaN LEDs used in our previous work 26,33,36.
3-7 Line 225: As explained before, I don't agree with the term "electrical pumping" used here. Actually there is no reason that optical pumping by an OPO or by an OLED would be that different, provided that they have similar durations and that the absorption is the same.
We are using the term "electrically" in the text and in Fig. 3c to distinguish between electrical and optical driving of our integrated device. We have changed the title of the paper to "Electrically driven organic laser using integrated OLED pumping" to convey more clearly that our device is indirectly pumped by an integrated OLED.
We have also revised the term 'electrical pumping' to 'electrically driven' in the main text as well as in the caption of Fig. 3c. We use the comparison of OPO pumping and OLED pumping to estimate the irradiance on the polymer laser gain medium, and hence the enhancement in OLED pumping obtained by using the carefully integrated structure compared with the irradiance that could be achieved from a separate OLED pump.
3-8 Fig. 1: The 130 µm distance represented by the double arrow is not consistent with the direction shown for the DFB grating grooves; this is a little bit confusing.
We have revised Fig. 1b to make the grating grooves much smaller to avoid the confusion, as shown in Response Figure 3.1. Response Figure 3.1, Revised Fig. 1.
3-9 Fig. 2: I think the characterization of the OLED could be more complete. In particular there should be an I-V curve obtained in CW conditions, compared maybe with the same under pulsed operation.
We are pleased to supply further OLED characterization. We measured the DC characteristics of the same OLED structure on a glass substrate and show these data (squares) in Response Figure 3.2. Under CW operation, however, the OLED cannot achieve the very high current densities that are required to drive the laser, due to Joule heating and thermal breakdown. The pulsed characterization of the same OLED on PNPN substrates is also shown in Response Figure 3.2. The range of these measurements was limited by the pulse driver electronics only at high voltages/current densities, although they do cover the range of Fig. 2c relevant to driving the laser. We have added these data to Extended Data Fig. 3.
The survey covers vertical organic devices with two terminals which achieve high current densities of more than 10 A/cm². This includes both OLEDs and unipolar devices. It is not limited to pulsed mode or to blue OLEDs. For clarity we revised the 'Literature survey of device performance and details of data collection' as follows: The data used to plot Fig. 2d and Extended Data Fig. 3 are summarized in the supplementary information and references [26,31,47,55-110]. We collated performance characteristics of two-terminal vertical organic devices which achieve a high current density of 10 A/cm² or more. We included information on OLEDs and unipolar devices. In some cases, peak wavelengths were not reported in the publication. In such cases, peak emission wavelengths were taken from other publications reporting either EL spectra or PL spectra of the same emission layers.
3-11 Although evidence of lasing is supplied without too many doubts, I find this laser characterization a little unsatisfactory. Figs. 3c and d do not really prove lasing; they just show a beam-like pattern that could be hidden in noise below threshold because of a lower power. A clear proof would be to show the photonic band diagram of the surface-emitting DFB laser, which should exhibit a collapse of both angular and spectral spread above threshold.
We agree that a collapse of both angular and spectral spread above threshold should be included. Indeed, the collapse of spectral spread was already included in Fig. 3a, which shows a distinct photonic stopband in the surface-diffracted emission when driven below threshold (2.0 kA/cm²) and a spectral narrowing (down to 0.09 nm) when driven above threshold (4.9 kA/cm²). We have adjusted the vertical scale of the far-field emission plot in Extended Data Fig. 6 to show the optical emission pattern below threshold more clearly. As shown in Response Figure 3.3, when the sample was driven below threshold (2.0 kA/cm²), the emission from the sample showed a divergent spatial profile, superimposed with two brighter stripes that are characteristic of the fluorescence diffracted out of the waveguide mode of a surface-emitting DFB laser. With this detectable fluorescent background, it is clear that no laser beam was observed at this peak current density. However, when driven at 4.9 kA/cm² (above threshold), a clear narrow-divergence laser beam was observed. The evolution of the far-field emission image and beam profile shows clear evidence of the collapse of angular spread when driven above threshold. Note that the signal to noise of our measurement would readily show narrowed emission at 2 kA/cm² if a linear process, such as fluorescence shaped by a grating or cavity, were responsible for the pattern visible above laser threshold.
Response Figure 3.3, Far-field emission image and averaged line beam profile of the electrically driven laser below and above threshold, measured 2.2 cm away from the device.
To visualise the photonic band structure, we have also measured the angle-resolved photoluminescence spectrum of the laser sample. Response Figure 3.4 shows the photonic band structure of the laser sample. A photonic stop band was observed at around 541 nm. This result is consistent with the measurement in Fig. 3a. The laser wavelength of our electrically driven laser (542 nm) was located on the longer wavelength side of this photonic band edge, which is typical of surface-emitting organic DFB lasers.
We have also measured the polarization of the emission below threshold. The sample was pumped by the OPO (450 nm, 20 Hz) below threshold (0.8 Pth). We observe that the fluorescence is partially polarized, as the Reviewer mentioned. However, above threshold the polarisation ratio of the laser is much higher than that of the fluorescence, as shown in the inset of Fig. 2b. This significant increase in polarisation ratio supports the evidence for laser action. We have added the below-threshold data to Extended Data Fig. 7.
Response Figure 3.5 (revised Extended Data Fig. 7), The normalized intensity as a function of the linear polarizer angle when the sample is electrically driven above threshold, and optically pumped 20% below threshold; the blue curve is a least-squares fit to a sin²(θ) function; 90° is defined as the polarisation parallel to the grating groove direction.
3-13 Is the solid line in Fig. 3b a guide to the eye? If so, is there a reason for the curve above threshold to be curved downwards?
The solid lines in Fig. 3b are linear fits to the data below and above threshold. We updated the figure caption to clarify this. The downward curve is due to the log scale of the axes, and is characteristic of the s-shaped curve of a laser power characteristic when plotted on a double-log graph. The plot on a linear scale is shown in Response Figure 3.6.
Response Figure 3.6, Integrated lasing intensity as a function of peak current density (linear scale). The blue lines are linear fits to the data below and above threshold.
We have added the following text on the measurement of the loss coefficient in the 'Characterization of film samples' section in the Methods: To measure the waveguide loss, the pump beam was focused into a narrow stripe shape (2.3 mm by 130 µm) using a cylindrical lens. The end of the stripe was positioned near the edge of the waveguide. The pump stripe was moved away from the edge of the film and the emission from the edge was collected by a fibre-coupled CCD spectrometer. The emission intensity from the edge was fitted by I = I₀ exp(−αx), where I₀ is a constant intensity, x is the distance of the stripe from the edge of the film and α is the waveguide loss coefficient.
…In part b, stars show the values calculated for the PNPN-OLED with an additional 1.5 µm parylene and PVPy as the coupling layer, and vertical lines show the refractive indexes of the cladding layers at 430 nm. c, Calculated dissipation spectrum at 430 nm as a function of the normalized in-plane wavevector. The regions separated by the dashed lines indicate the regions of the device into which the emission is trapped (and dissipated). The losses in the structure (integrated over the full emission spectrum) are 8% in evanescent modes, 14% absorption in the semi-transparent electrode, 16% in waveguide modes and the parylene layer, leaving 62% within the light cone entering the PVPy cladding of the laser waveguide. We assume that in the integrated structure all of this light entering the PVPy layer may be used to pump the polymer laser. For the case of a separate OLED and organic laser, however, only the light within the air light cone (in-plane wavevector < 0.5) may be utilised to pump the laser.
4-3 I'm surprised that the authors assumed a Lambertian pattern, especially given the peaky output of the OLED, which is presumably due to the weak cavity formed by the anode and cathode. Why was it necessary to assume a particular emission pattern? One imagines that the emission pattern could be calculated.
Response: Following the Reviewer's suggestion, we calculated the irradiance falling on a surface illuminated by the PNPN-OLED using our out-coupling efficiency simulation. Response Figure 4.3 shows the new calculation, which is included as a revised version of Extended Data Fig. 1. The new calculation changes the peak irradiance at a distance of 100 µm from the OLED from 55% of the peak radiant exitance (Lambertian profile) to 60% of the peak radiant exitance. We have consequently updated the text mentioning the calculated results in the 'Overview of integrated OLED': Emission from an OLED is normally highly divergent, so the irradiance decreases rapidly with distance. This is severe even over a short distance for OLEDs with a small active area; for the OLED of active area 130 µm × 1 mm used in this work, at a distance of 100 µm we calculate that the peak irradiance is reduced to only 60% of the peak radiant exitance of the OLED, while at 7 µm the peak irradiance is higher than 99% (Extended Data Fig. 1). We therefore designed the OLED and laser waveguide to be separated by only 7 µm to maximise the excitation density in the gain material.
Furthermore, we added the following sentences to 'Calculation of irradiance distribution' in Methods: In the simulation, the OLED was divided into 1 µm × 1 µm sections and P_OLED was normalised to a value of 1. The OLED emission pattern was calculated using the same simulation as for the out-coupling efficiency. We simulated the emission pattern of the PNPN-OLED emitting into parylene.
4-4 It would be interesting to compare the modelling results (especially the predicted coupling between the OLED and cavity) to the observed threshold.
Response: We have already presented a comparison of the modelled outcoupling efficiency (Ext. Data Fig. 2b) and the experimental results in section (6), Characterization of integrated laser. Using the assisted pumping measurement, we estimate the actual irradiance delivered to the organic laser from the OLED. Because the OLED and organic laser are very closely integrated, the reduction in peak irradiance compared with the radiant exitance of the OLED is very small, less than 1%. Thus, the irradiance is expected to be the same as the radiant exitance of the OLED emitted towards the organic laser. By comparing this radiant exitance with the one measured for the OLED emitting into air, we estimated the enhancement of light coupling efficiency from emitting into air to emitting into the organic laser. We compared this enhancement with the calculated enhancement of outcoupling efficiency from emitting into air to emitting into the organic laser (Ext. Data Fig. 2b). To clarify our approach, we modified the text at the end of section 6: The measurements of electroluminescence in Fig. 2 show that for a peak drive current of 5 kA/cm², 43 W/cm² is outcoupled into air. The higher equivalent power density of 95 W/cm² in the integrated device suggests that the coupling efficiency to the gain medium is enhanced by a factor of 2.4 ± 0.3 (after taking account of differences in absorption between OPO and OLED excitation wavelengths), which is in good agreement with our calculated value of 2.3 for the coupling enhancement and demonstrates the benefit of our integrated device structure.
4-5 Readers may benefit from some context. The concept of a separated pump and lasing cavity has a long history in lasing. Diode pumping is used in many conventional systems today (especially high-power lasers?).
Following the Reviewer's suggestion, we revised the text in the third paragraph of the introduction: In view of the difficulties outlined above, we have pursued an alternative approach in which we separate the region where charges are injected from the region where the laser population inversion is formed. The gain medium is then excited by electroluminescence from the charge injection region 26. This is conceptually similar to diode-laser-pumped solid state lasers, and to nitride LED pumping 26-28, but here we achieve a fully integrated all-organic device. In this way we avoid the losses due to injected charges, greatly reduce losses due to triplets, and also reduce losses due to contacts.
4-6 Finally, we have struggled since the early days of optically-pumped organic lasers to identify the unique advantages of organic EL lasers. The immediate impact of this work is to demonstrate that they are possible. The closing speculation about a few potential, and rather obscure, applications struck me as a potential distraction from the main result.
We agree with the Reviewer's suggestion, and have removed the speculation on future applications. Instead, we close by commenting on the implications for the field of organic optoelectronics: This advance in organic lasers requires the OLED to operate under exceptionally intense current injection, making a new type of very fast organic optoelectronic device. The microscopic physics of OLEDs under such intense, short-pulse operation has been little explored to date. We anticipate that our work will stimulate future studies to understand the dynamics of organic semiconductors in this regime that could lead to significant improvements in device performance and open up new applications of ultrafast organic optoelectronics.
Reviewer Reports on the First Revision:
Referees' comments:
Referee #1 (Remarks to the Author): I am satisfied with the response and find the revised manuscript much more rigorous than the previous iteration.
Referee #2 (Remarks to the Author): I thank the authors for providing appropriate answers, except for one issue. I think the pulse duration dependence would be quite important for understanding the exciton deactivation processes. Please supply it with a possible discussion.
Referee #3 (Remarks to the Author): The authors have thoroughly answered all the questions raised by the reviewers, resulting in an appreciably better manuscript. I am happy to follow the opinion of the other reviewers in considering this top-quality contribution as suitable for publication in Nature. However, regarding the main criticism made by almost all reviewers about the choice of the title, I still consider that the new title is not appropriate. The new proposition, "Electrically driven organic laser using integrated OLED pumping", is misleading, as the actual laser is not electrically pumped but rather optically pumped: it is of course "electrically driven", but exactly like basically all lasers are, except maybe some exotic sun-pumped lasers. Therefore, there is no reason to highlight the "electrical driving" of this laser, especially in the title, except maybe if the term goes with "indirect".
As I already suggested in my first review, the term "electrical driving" is strongly associated with the idea of a population inversion obtained by electrical injection of carriers, and keeping the title as it is would maintain the ambiguity, which would not be totally honest."An OLED pumped organic laser" or "Indirect electrical driving of an organic laser by integrated organic LED" or something with a similar flavor would be better titles, I think.Referee #4 (Remarks to the Author): Fig.8|Operational lifetime of the electrically driven laser and OLED.a, Normalized laser peak intensity as a function of number of current pulses.b, Laser spectrum recorded at the start, and near the end, of the lifetime measurement.c, Normalized OLED peak intensity as a function of number of current pulses. Figure 1 . 2 , Calculated dissipation spectra at 430 nm as a function of the normalized inplane wavevector.The regions separated by the dashed lines indicate the regions of the device into which the emission is trapped (and dissipated). Figure 3 . 2 , Comparison of voltage-current density characteristics and EQE-current density of the PNPN-OLED and the OLED with the same organic layers but on the glass substrates.3-10 For fig 2d and extended data fig 3, it should be mentioned what are the papers, among the extended OLED literature, that were selected to make this literature survey.Was it limited to OLEDs emitting in pulsed mode (and if so, what was the limit of pulse duration ?), or was it limited to deep-blue OLEDs ? Figure 3.4, Angle resolved photoluminescence spectrum of the DFB laser when pumped below threshold which shows the characteristic anti-crossing of the waveguide modes due to the photonic stopband that gives rise to band edge lasing when pumped above laser threshold.(we note that the additional weak band observed in this figure is attributed to scattered light from another nearby grating on the sample during the measurement) 3-12 The polarization data in fig 3b should be compared with the fluorescence polarization recorded below threshold, as a diffractive structure like a DFB grating yields polarized luminescence branchs below threshold usually 3- 14 Extended data fig. 2 : there is probably a legend missing, I don't get what the dashed lines stand for We apologise for this error and have corrected Extended Data Fig. 2b as shown below in Response Fig. 3.7.The dashed line shows the calculated enhancement of the coupling efficiency into an outer medium of high refractive compared to emission into air.In the initial version the arrows were missing.The blue line indicates the situation for light coupling into the PVPy cladding of the laser.Response Fig. 3.7, Revised Extended Data Fig.2b.3-15 Extended data fig.4 : unless I did not read everything very carefully, I did not find experimental details relative to this loss measurement coefficient. Figure 4 . 2 , Calculated dissipation spectra at 430 nm as a function of the normalized inplane wavevector.In the figure, the four regions separated by the dashed line indicate different optical modes, i.e., outcoupled light to PVPy layer, confined light by parylene in PVPy interface (parylene), waveguided mode (WG), and the evanescent light.We have added Response Figure 4.2 into the Extended Data Fig.2, and the following text to the figure caption: Figure 4 . 
Figure 4.3, Calculated distribution of irradiance from a PNPN-OLED of width 130 µm and length 1 mm. Irradiance calculated at distances of a, 7 µm and b, 100 µm. The origin was set at the centre of the OLED active area and only positive directions were calculated due to the symmetry of the OLED along each axis. Hence only a quarter of the OLED is shown. White broken lines show the active area of the OLED.

I have just another minor comment: My remark on fig 1 was probably unclear; I think the grating dimensions chosen in the first version were better for clarity than the smaller pitch chosen in the new version. My concern was rather about the dimensions of the device that appear in fig 1: from Fig. 1c, I understand that the DFB grating has dimensions 1 mm x 1 mm, but only a thin stripe of width 130 µm is active, the 130-µm dimension being parallel to the DFB grooves. But in fig. 1a, the 130-µm dimension appears to be perpendicular to the grooves. There is something confusing about fig 1a, in particular the significance of the purple trapezoid (is it representing light from the OLED?).

Table 1. Summary of uncertainty in current density measurement for the estimate of 6.3 kA/cm². Note: [1] Combined uncertainty of the OLED size measurement, which includes uncertainties caused by the minimum pixel size, the minimum scale used to calibrate length in the microscope image, and variation in the repeated measurements.
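For context on Table 1: when the current density is estimated as J = I/(w·L), the combined relative uncertainty is usually obtained by adding the independent relative uncertainties of the current and of the measured stripe dimensions in quadrature. The sketch below illustrates that root-sum-square combination only; the variable names and percentage values are assumptions for illustration, not the entries of the authors' table.

```python
import math

def combined_relative_uncertainty(rel_uncertainties):
    """Root-sum-square combination of independent relative (fractional) uncertainties."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

# Hypothetical contributions (not taken from the paper's Table 1):
rel_u_current = 0.02   # peak current measurement
rel_u_width = 0.03     # 130 um stripe width (pixel size, calibration, repeatability)
rel_u_length = 0.01    # 1 mm stripe length

rel_u_j = combined_relative_uncertainty([rel_u_current, rel_u_width, rel_u_length])
j_estimate = 6.3e3  # A/cm^2, the reported estimate
print(f"J = {j_estimate:.0f} A/cm^2 +/- {rel_u_j * 100:.1f} % "
      f"(+/- {rel_u_j * j_estimate:.0f} A/cm^2)")
```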
15,254.4
2023-09-01T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Annales Geophysicae

Higher order ionospheric effects in GNSS positioning in the European region

After removal of the Selective Availability in 2000, the ionosphere became the dominant error source for Global Navigation Satellite Systems (GNSS), especially for high-accuracy (cm-mm) applications such as Precise Point Positioning (PPP) and Real Time Kinematic (RTK) positioning. The common practice of eliminating the ionospheric error, e.g. by the ionosphere free (IF) observable, which is a linear combination of observables on two frequencies such as GPS L1 and L2, accounts for about 99 % of the total ionospheric effect, known as the first order ionospheric effect (Ion1). The remaining 1 % residual range errors (RREs) in the IF observable are due to the higher order, i.e. second and third order, ionospheric effects, Ion2 and Ion3, respectively. Both terms are related to the electron content along the signal path; moreover, the Ion2 term is associated with the influence of the geomagnetic field on the ionospheric refractive index and Ion3 with the ray bending effect of the ionosphere, which can cause significant deviation in the ray trajectory (due to strong electron density gradients in the ionosphere) such that the error contribution of Ion3 can exceed that of Ion2 (Kim and Tinin, 2007). The higher order error terms do not cancel out in the (first order) ionospherically corrected observable and, as such, when not accounted for they can degrade the accuracy of GNSS positioning, depending on the level of solar activity and the geomagnetic and ionospheric conditions (Hoque and Jakowski, 2007). Simulation results from the early 1990s show that Ion2 and Ion3 would contribute to the ionospheric error budget by less than 1 % of the Ion1 term at GPS frequencies (Datta-Barua et al., 2008). Although the IF observable may provide sufficient accuracy for most GNSS applications, Ion2 and Ion3 need to be considered for higher accuracy demanding applications, especially at times of higher solar activity. This paper investigates the higher order ionospheric effects (Ion2 and Ion3, however excluding the ray bending effects associated with Ion3) in GNSS positioning in the European region, considering the precise point positioning (PPP) method. For this purpose observations from four European stations were considered. These observations were taken in four time intervals corresponding to various geophysical conditions: the active and quiet periods of the solar cycle, 2001 and 2006, respectively, excluding the effects of disturbances in the geomagnetic field (i.e. geomagnetic storms), as well as the years of 2001 and 2003, this time including the impact of geomagnetic disturbances. The program RINEX HO (Marques et al., 2011) was used to calculate the magnitudes of Ion2 and Ion3 on the range measurements as well as the total electron content (TEC) observed on each receiver-satellite link. The program also corrects the GPS observation files for Ion2 and Ion3; thereafter it is possible to perform PPP with both the original and corrected GPS observation files to analyze the impact of the higher order ionospheric error terms, excluding the ray bending effect, which may become significant especially at low elevation angles (Ioannides and Strangeways, 2002), on the estimated station coordinates.
Introduction

After removal of the Selective Availability in 2000, the ionosphere became the dominant error source in the Global Navigation Satellite Systems (GNSS) error budget (El-Rabbany, 2002). The ionosphere is a medium of free electrons and ions and, as such, its dispersive nature makes the ionospheric refractive index different from unity. This difference in the refractive index causes the group and phase velocities of the GNSS signals to differ from each other during propagation through the ionosphere, such that the group velocity decreases (leading to the group delay, i.e. code measurements longer than the geometric range) and the phase velocity increases (leading to the carrier phase advance, i.e. carrier phase measurements shorter than the geometric range). When the phase and group velocities are affected, the ray direction is also likely to be affected, unless the wave is travelling perpendicularly to the gradient in the ionospheric refractive index (Cairo and Cerisier, 1976). This effect, also known as the ray bending effect, is inversely proportional to the signal frequency and is highly dependent on the satellite elevation angle. The error due to the ray bending is orders of magnitude smaller than the first order ionospheric error; it is indeed comparable to that of the higher order error terms (Petrie et al., 2010).

The diffractive and ray-bending effects of the ionosphere on GNSS signals are neglected in this work; only the part of the Ion3 term quadratic in the electron density is analysed, although the third order error due to ray bending may, under some conditions, exceed or be comparable in magnitude to the second order error term (Hoque and Jakowski, 2008). Neglecting the ray bending effect amounts to assuming that the GNSS signals travel along the straight LoS path between the receiver and satellite (Hoque and Jakowski, 2007) instead of two slightly different paths (bent and LoS paths). This assumption (of neglecting a bent path) then implies that the TEC and the geomagnetic field effect along the signal path are the same for different (e.g. L1 and L2) signal frequencies. The RINEX HO program does not estimate corrections for the bending effect, thus the measurements corrected for Ion3 will still be contaminated by the ray bending effect. It should also be mentioned at this point that, for the estimation of the Ion2 term, the ionosphere is assumed to be a single thin shell at 450 km and the magnetic induction B is computed at the ionospheric pierce point (IPP) and not along the ray path of the signal. This assumption is expected to introduce some errors into the computation of the Ion2 term; however, it has less computational burden than the ray path approach.
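To make the first-order effect described above concrete, the standard thin-plasma approximation gives a group delay of about 40.3·STEC/f² metres (with an equal and opposite phase advance), and the dual-frequency ionosphere-free combination removes exactly this term. The following minimal sketch illustrates these textbook relations only; it is not part of the RINEX HO or Bernese processing used in this paper, and the numeric values are illustrative.

```python
# Minimal sketch of the first-order (Ion1) ionospheric range error and the
# ionosphere-free (IF) pseudorange combination; values are illustrative.

F_L1 = 1575.42e6  # Hz
F_L2 = 1227.60e6  # Hz
K = 40.3          # standard first-order coefficient (m^3 s^-2 per electron)

def ion1_delay(stec_tecu, freq_hz):
    """First-order group delay in metres for a slant TEC given in TEC units."""
    stec = stec_tecu * 1e16          # 1 TECU = 1e16 electrons / m^2
    return K * stec / freq_hz ** 2

def ionosphere_free(p1, p2):
    """Ionosphere-free combination of the L1 and L2 code observations."""
    return (F_L1 ** 2 * p1 - F_L2 ** 2 * p2) / (F_L1 ** 2 - F_L2 ** 2)

stec = 50.0  # TECU, a moderate mid-latitude slant value
print(f"Ion1 on L1: {ion1_delay(stec, F_L1):.2f} m")   # ~0.16 m per TECU
print(f"Ion1 on L2: {ion1_delay(stec, F_L2):.2f} m")

# Synthetic pseudoranges: geometric range plus the first-order delay only.
rho = 22_000_000.0  # m
p1 = rho + ion1_delay(stec, F_L1)
p2 = rho + ion1_delay(stec, F_L2)
print(f"IF combination recovers the geometric range: {ionosphere_free(p1, p2):.3f} m")
```

Because only the 1/f² term is modelled here, the IF combination recovers the geometric range exactly; the second- and third-order terms discussed in the following sections are what remains in real data.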
Subsequently, when positioning is performed with the corrected and uncorrected measurements, an elevation cutoff angle of 10° is considered in the PPP. As shown by Hoque and Jakowski (2007), the ray bending effect on the GNSS signals becomes significant especially at low satellite elevation angles, below 10°; thus this cutoff angle is thought to be an appropriate threshold for comparing the positioning results (considering the corrected and uncorrected observations) with negligible contribution by the ray bending effect. More detailed analyses of the impact of ray bending on GNSS signal propagation and observations have been presented, among other researchers, by Hoque and Jakowski (2008), who provided an empirical formula for the geometric bending of the GPS signals; Hartmann and Leitinger (1984), who considered the geometric bending of the signals in their analysis of the RREs due to the atmosphere; and Petrie et al. (2010), who used the International Reference Ionosphere (IRI) 2007 model (Bilitza and Reinisch, 2008) to estimate the potential size of the ray bending effect on the estimated GPS parameters and positioning.

The different orders of the ionospheric error terms (Ion1, Ion2 and Ion3, as denoted in this work for the first, second and third order terms) have magnitudes that vary according to the background solar, ionospheric and geomagnetic conditions. At times of high background solar activity, as during the peaks of the solar cycle or the active days of an ionospheric storm, the greater amount of solar radiation causes increased levels of electron density in the ionosphere. This can cause the slant range delay on the GPS L1 signal link to be as large as 100 m in the uncorrected observable (the error contributed by the Ion1 term is about 10 to 100 m in general, as shown by Klobuchar and Kunches, 2003, and Grewal et al., 2007). In the ionospherically corrected (to the first order, as in dual frequency applications) observable, RREs can reach tens of centimetres (the Ion2 term contributes about several centimetres of range error, as shown by Bassiri and Hajj, 1992, 1993; Kedar et al., 2003; Fritsche et al., 2005; Hoque and Jakowski, 2006; Morton et al., 2009) and the Ion3 term about 1 cm or less, e.g. during disturbed ionospheric background conditions, which may happen due to geomagnetic storms, as discussed by Bassiri and Hajj (1993), Brunner and Gu (1991) and Kedar et al. (2003).

In general, most of the ionospheric range error can be eliminated depending on the method of positioning performed, i.e. stand-alone or differential. In stand-alone mode, users with a dual frequency receiver can account for the first order ionospheric effect by the IF observable, whereas users with a single frequency receiver can resort to an ionospheric model like the Klobuchar model (Leick, 2004). The IF observable can eliminate about 99 % of the total ionospheric effect (i.e.
the Ion1 term), yielding an accuracy sufficient for most GNSS applications; an ionospheric model like the Klobuchar model, however, can correct about 50-60 % of the total ionospheric effect and gives limited performance for users outside the mid-latitudes (Orus et al., 2002). In the differential mode, for short baselines, the ionospheric error can be eliminated by ionospheric corrections obtained from a reference station, assuming a spatially and temporally correlated ionosphere (for such a short baseline) between the user and reference. However, during adverse ionospheric conditions the spatial and temporal correlation of the ionospheric errors can decrease.

For high accuracy demanding GNSS applications like PPP and RTK, especially during the peaks of the solar cycle (and even more so during geomagnetic storms), Ion2 and Ion3 need to be considered, as they can cause range errors of a few to tens of centimetres (Wang et al., 2005). The impact of the higher order ionospheric errors on the estimated station coordinates is studied by comparing the coordinates estimated by PPP performed with the original and corrected observation files, as done in this work.

Section 1 of this paper gives an introduction to the work performed; Sect. 2 presents the literature review; Sect. 3 introduces the methodology for calculating the higher order ionospheric error terms by using the RINEX HO software and for PPP; Sect. 4 presents the results for the calculated values of Ion2 and Ion3, whereas Sect. 5 presents the PPP results obtained from both the original and corrected observation files. The paper concludes with discussion and suggestions for future work in Sect. 6.

Literature review

Previous works related to the higher order ionospheric effects consider the ionospheric refractive index to derive the error due to the ionosphere and the Chapman theory for the layered structure of the ionosphere; they account for the influence of the geomagnetic field on the refractive index of the (anisotropic) ionosphere, and some may neglect the differential (frequency and satellite elevation angle dependent) bending effect on the GNSS signals. Different authors consider these concepts differently to estimate the contribution of the higher order ionospheric effects to the GNSS error budget. In Wang et al. (2005) a multi-GNSS approach is taken to estimate the higher order error terms; the authors focus on the techniques of eliminating/estimating the ionospheric errors through new linear combinations possible with the new signal frequencies of the modernized GNSS. Brunner and Gu (1991) observe that the RREs due to Ion2 and Ion3 in the dual frequency solution (i.e.
using the IF observable) can reach several centimetres at low satellite elevations when the ionospheric electron density is as high as during the active period of the solar cycle. Their proposed model (using two separate Chapman functions to represent the top and bottom sides of the ionospheric electron density profiles) can eliminate the RREs to better than 1 mm by considering a series expansion of the ionospheric refractive index, an accurate ionosphere model (that provides the electron density as a function of height in the ionosphere), the International Geomagnetic Reference Field (IGRF) and also by accounting for the differential bending effect (important especially at low elevation angles) of the GPS signals (along with the tropospheric effect on the curvature of the GPS signals). Bassiri and Hajj (1993) propose an approach which can eliminate the RREs to the millimetre level by considering a series expansion of the ionospheric refractive index, a thin shell model for the ionosphere (as a sum of E, F1 and F2 Chapman layers), a tilted dipole model for the geomagnetic field, and by neglecting the bending effect on the GPS signals (since they assume that the bending effect is insignificant for satellite elevation angles greater than 30°). Ioannides and Strangeways (2002) present an analytical perturbation technique to determine the Ion2 and Ion3 terms in which they account for the ray bending effect, and the authors compare these results with those obtained from precise ray-tracing calculations for the GPS frequencies. They conclude that the refracted geometrical path increases compared with the LoS and that there is a corresponding increase in the TEC with an associated phase advance. If the influence of the magnetic field on the L band signals is neglected, then the total curvature error is of comparable magnitude to the increase in the geometrical path length related to the longer curved path but of opposite sign; this represents the phase advance. The authors thereafter suggest that the two terms do not need to be determined, since the total curvature error is of the same magnitude but opposite sign as the increase in the geometrical path. Kedar et al. (2003) focus on the impact of Ion2 on PPP by considering a co-centric tilted magnetic dipole and the GIM software (Global Ionospheric Mapping software from the Jet Propulsion Laboratory, JPL, 2010), which provides two-dimensional electron density maps for the ionosphere taken as a thin layer at 400 km altitude. They use satellite clock and orbit products not corrected for Ion2. In their comparison of the coordinate time series corrected for Ion2 with the original uncorrected coordinates, they find a sub-centimetre level error contribution by the Ion2 term to the GNSS positioning error. Wang et al. (2005) present a triple-frequency method for correcting Ion2 and propose an ionosphere-free linear combination method based on three frequencies, claiming that their triple-frequency method can correct the effects to the millimetre level. Moreover, they derive a formula for Ion3, for which they apply the semi-empirical ionospheric model developed by Anderson et al. (1987), who relate TEC linearly to the maximum electron density (N_max) in the ionosphere (Eq. 1), which gives N_max as 4.405 × 10⁻⁶ TEC; this agrees well with the linear interpolation N_max ≈ 4.415 × 10⁻⁶ TEC applied by Fritsche et al. (2005). After obtaining TEC from pseudorange (PR) measurements with L1 and L2, Wang et al.
(2005) can estimate Ion3 with an accuracy of ±1 mm. Fritsche et al. (2005) investigate the impact of correcting the satellite orbits and Earth rotation parameters while estimating the station coordinates in a non-fiducial PPP approach using the Bernese GPS Software V5.0 (Bernese, 2007). Following the mathematical model of Bassiri and Hajj (1993) for Ion2 and Ion3 and using a thin shell model for the ionosphere, they apply GIMs for TEC data and a co-centric tilted magnetic dipole for the geomagnetic field. They show that both the station coordinates and the satellite orbits can change at the centimetre level when the corrections for Ion2 and Ion3 are applied. They emphasize that a consistent correction method for RREs should apply the corrections both to the GPS observations and to the products, rather than using corrected observations together with products for which the RREs have not been taken into account. Hoque and Jakowski (2007) quantify the residual "phase" error due to Ion2 and neglect that due to Ion3 (the differential bending of the GNSS signals is also neglected), claiming that on a disturbed day (e.g. about 100 TECU) the RRE due to Ion3 is at the sub-millimetre level. Their model, which can provide better than 2 mm accuracy for GNSS users in Germany, does not require knowledge of the instantaneous geomagnetic field, since they take the geomagnetic field component for a reference user position in central Germany. Knowledge about the electron distribution along the propagation path is also not required. These assumptions make the model applicable for real time GNSS applications in central Germany. Kim and Tinin (2007) use perturbation theory to study the residual error in the dual frequency ionosphere free observable; they investigate in particular the Ion3 term associated with the ray bending effect on GNSS signals penetrating through an inhomogeneous ionosphere. Taking into account that the Ion3 term includes not only the quadratic correction due to the refractive index but also the correction for the ray bending effect, they show that the ray bending effect may dominate the Ion2 error contribution. They consider both the regular large scale and the random small scale irregularities in the ionosphere, such that the latter can, at times, cause a residual error comparable to or greater than that of the Ion2 term, dominating the contribution to the residual error in the IF observable. Pajares et al. (2007) consider the impact of Ion2 on the satellite clocks and show that the estimates of the RREs on the receiver coordinates, satellite positions and clocks are correlated. Regarding the receiver positions, they infer that Ion2 has a sub-daily impact of less than 1 mm during March 2001, a year during the peak of the solar cycle. As for the satellite positions, they show that Ion2 causes a daily mean global southward displacement of several millimetres, depending on the ionization level in the ionosphere. Regarding the satellite clocks, which are most affected by the higher order ionospheric effects according to their results, RREs can cause deviations even larger than 30 picoseconds (corresponding to about 1 cm), depending on the latitude and local time of the receiver position. Datta-Barua et al.
(2008) show that, unlike the Ion1 term, which has the same magnitude but opposite signs for the group and carrier phase measurements, Ion2 and Ion3 have different magnitudes and signs for these two types of measurements. For this reason, the authors claim that the higher order errors accumulate in the carrier smoothing of the IF (to the first order) code observable; the authors show that the errors in the carrier-smoothed code measurements are mostly due to Ion2 and Ion3. In other words, the unaccounted higher order group errors contribute to the error in the carrier smoothing. Although they can be neglected in many applications, these residual errors can be crucial in high precision applications. Pajares et al. (2008b) focus on Ion2 and on different methods to obtain the slant TEC (STEC) and the geomagnetic field component projected onto the receiver-satellite path. Considering the error due to Ion2 on the positioning, they emphasize that the correction for Ion2 must be applied to all fiducial coordinates; applying it only to the unknown (user) station can lead to errors in the estimated coordinates that can be worse than applying no correction at all at any of the receivers involved. Kim and Tinin (2011), in a more recent study, explore possible ways of eliminating the higher order ionospheric error terms with a multi-frequency GNSS approach and show how the GNSS accuracy can be improved when the propagation of the signals through an inhomogeneous and random structure of the ionosphere is taken into account. Through numerical simulation they show that the systematic, residual ionospheric error can be significantly reduced (under certain ionospheric conditions) through triple frequency GNSS. Moore and Morton (2011) focus on the Ion2 term introduced by the interaction between the GNSS signal and the magnetic field of the Earth. The anisotropy of the ionosphere causes the right hand circularly polarized (RHCP) GPS signals to propagate in two (ordinary and extraordinary) modes, as a linear combination of them, depending on the angle between the GPS signal wave normal and the Earth's magnetic field. These two modes correspond to two different magneto-ionic polarizations, each with a particular refractive index that needs to be considered in the Ion2 term. The authors show that near the geomagnetic equator signals arriving from the north propagate with the ordinary polarization (associated with a left hand circularly polarized wave), yielding a "positive" Ion2 term for the carrier phase, and those arriving from the south propagate in the extraordinary mode polarization (associated with a right hand circularly polarized wave), making the Ion2 term "negative" for the carrier phase. A positive Ion2 term corresponds to the presence of error still to be accounted for in the (first order) ionospherically corrected measurements. The authors also point out a misconception in the work of Bassiri and Hajj (1993), who assume that the left hand circularly polarized (LHCP) component of GPS signals propagates in the ordinary mode and do not realize that the RHCP signal component may indeed travel in either of the propagation modes. Moore and Morton (2011) show that the magneto-ionic polarization of the predominantly RHCP GPS signal depends on the direction of the GPS signal wave vector with respect to the magnetic field line. Considering three different geographic locations to show the influence of this fact on the Ion2 term, Moore and Morton (2011) show that Ion2 is asymmetric with respect to the geomagnetic equator such that, depending on the
magnitude of the angle between the wave vector and the magnetic field line, an RHCP wave has different propagation modes; considering only one propagation mode is therefore expected to lead to mismodelling inaccuracies in estimating the error due to the Ion2 term.

From the literature review it can be understood that, while estimating the magnitudes of the errors due to Ion2 and Ion3, higher accuracy can be achieved by (1) using a more precise geomagnetic field model like the IGRM instead of the dipole model; (2) obtaining accurate estimates of the electron content along the signal link (STEC values), which can be either retrieved from Global Ionospheric Maps (GIMs) or estimated from PR measurements; (3) using satellite and orbit products estimated by applying corrections for the higher order ionospheric error terms (this is particularly important for a systematic and accurate analysis of the impact of the higher order terms on PPP). Regarding point 2, vertical TEC (VTEC) data from GIMs can be converted into STEC by making use of a single layer mapping function in RINEX HO. Alternatively, STEC can also be obtained from the PR measurements on the L1 and L2 frequencies (Eq. 2); this, however, requires inputting the receiver and satellite Differential Code Biases (DCB_rec and DCB_sat, respectively), which were obtained from CODE for use in RINEX HO in this work. The uncertainty in any of the terms on the right hand side of Eq. (2) propagates into the calculation of STEC on a particular receiver-satellite link, according to the error propagation law (Eq. 3). Although STEC can be estimated with a comparable accuracy using either GIMs or PRs (Marques et al., 2007), the non-availability of the DCB values at some instances hinders the estimation of STEC from PRs. Therefore STEC is obtained from GIMs in RINEX HO in this work. Regarding the third point, since such corrected products were not available during the progress of this work, the satellite and orbit products estimated without corrections for the higher order error terms were used.

Methodology

The observation stations in this work are selected from the International GNSS Service (IGS, 2010) network, aiming for a reasonably good latitudinal (mid and high latitudes, including the auroral region) and longitudinal coverage in Europe (Fig. 1); the station coordinates are provided in Table 1. Four sets of days (Table 2) are selected for analysis. In order to investigate the impact of the solar activity devoid of disturbances in the geomagnetic field, day-of-year (DOY) 312-316 in 2001 and 321-326 in 2006 were selected; for these periods, the planetary geomagnetic index, Kp, is <4, which is a good threshold to exclude the influence of geomagnetic storms (NOAA, 2010). In order to include the influence of a more disturbed geomagnetic field, DOY 294-296 in 2001 and 301-307 in 2003 were selected, when Kp was ≥4. Other geomagnetic indices like the AE (Auroral Electrojet) index or the Dst (Disturbance Storm Time) index could also be considered (World Data Center for Geomagnetism, WDC, 2011) while selecting the days for analysis. However, the Kp index was deemed adequate, given the focus of this work on the mid-to-high latitudes.
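As a sketch of point (2) above: the geometry-free combination of the L1 and L2 pseudoranges yields STEC once the receiver and satellite DCBs have been removed, and independent errors in the inputs propagate into the STEC estimate in quadrature. This mirrors the role of Eqs. (2) and (3) in spirit, but it is not the RINEX HO implementation; in particular, the sign convention assumed for the DCBs and all numeric values below are illustrative assumptions.

```python
import math

C = 299_792_458.0      # m/s
F1, F2 = 1575.42e6, 1227.60e6
# Converts a geometry-free code difference (P2 - P1, in metres) to electrons/m^2.
ALPHA = F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))

def stec_from_pseudoranges(p1_m, p2_m, dcb_rec_ns, dcb_sat_ns):
    """Slant TEC (in TECU) from the L1/L2 geometry-free code combination.

    dcb_* are differential code biases in nanoseconds; their sign convention
    is product-dependent and is assumed here for illustration only.
    """
    bias_m = C * (dcb_rec_ns + dcb_sat_ns) * 1e-9
    return ALPHA * (p2_m - p1_m - bias_m) / 1e16

def stec_uncertainty(sig_p1_m, sig_p2_m, sig_dcb_m):
    """Propagate independent 1-sigma errors (all in metres) into the STEC estimate."""
    sig_gf = math.sqrt(sig_p1_m**2 + sig_p2_m**2 + sig_dcb_m**2)
    return ALPHA * sig_gf / 1e16

# Illustrative numbers: a 10 m geometry-free difference and 3 ns of total DCB.
print(f"STEC  ~ {stec_from_pseudoranges(22_000_010.0, 22_000_020.0, 2.0, 1.0):.1f} TECU")
print(f"sigma ~ {stec_uncertainty(0.3, 0.3, 0.15):.1f} TECU")
```

With roughly 9.5 TECU per metre of geometry-free difference, even decimetre-level code noise and DCB errors map into several TECU of STEC uncertainty, which is comparable to the 2-8 TECU quoted for GIMs and helps explain why either source is acceptable here.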
The range errors in the GPS observables due to Ion2 and Ion3 (on the GPS L1 and L2 frequencies) were estimated and corrected using the software tool RINEX HO (Marques et al., 2011), developed at the Sao Paulo State University in Presidente Prudente, Brazil. The program requires as input the observation and navigation files (in the receiver independent exchange, RINEX, format), and GIM files or DCB information (according to the user's choice of the method for TEC calculation). An input text file is used, with the relevant file names and execution specifications (in this case the choice of the method to calculate TEC). The program applies the corrections to the GPS code and phase observables, corrects the input observation file accordingly and returns the output files (corrected observation file, corrections for Ion2 and Ion3 on the L1 and L2 frequencies for code observations, and STEC for each receiver-satellite link).

The corrected observation file makes it possible to perform positioning in order to compare the station coordinates estimated using the corrected and original observation files. This allows assessment of the impact of the higher order ionospheric effects on PPP, which is accomplished using the Bernese V5.0 software (Bernese, 2007) in this work.

The code and the carrier phase GNSS observation equations are given in Eqs. (4) and (5), respectively. The ionospheric delay term (I_f1) appears with a "+" sign for the code delay (Eq. 4) and with a "−" sign for the advance of the carrier phase (Eq. 5). Focusing on the PR measurements (Eq. 4), the ionospheric delay effect (I_f1) can be represented more explicitly as a series in inverse powers of frequency within the geometric optics (GO) approach, i.e. considering the refraction of the GNSS signals penetrating through (large scale) electron density irregularities in the ionosphere. A series representation of the delay effect on the PR measurements, δρ_g in Eq. (6), can be derived when the Appleton-Hartree equation (Budden, 1966) is considered for the ionospheric refractive index, which is different from unity. Based on the GO approach, the Appleton-Hartree formula leads to the three terms on the right hand side of Eq. (7), which are the first (Ion1), second (Ion2) and third (Ion3) order ionospheric effects that delay the code measurement, respectively. The same order effects for the carrier phase measurements are given in Eq. (8). Hereafter the discussion considers the ionospheric effects on the PR measurements (Eq. 7); the arguments are however applicable to the carrier phase range measurements, taking into account the correct sign notation and coefficients for these three terms. The ∫N ds term in Eq.
(6) is the integral of the electron density (N) along the LoS between the receiver and satellite inside a columnar cylinder of unit cross-sectional area, such that the integration gives the (slant) total electron content along the LoS, STEC (1 TEC unit, 1 TECU, is 10¹⁶ e⁻/m²). At times of low background solar activity, as during the quiet periods of the solar cycle, STEC is usually around 20-30 TECU at mid latitudes, corresponding to about 3-8 m range delay on the GPS L1 frequency and giving negligible RREs due to the Ion2 and Ion3 terms under these conditions (1 TECU has about a 0.16 m delay effect on GPS L1; Kintner Jr., 2006). STEC depends on the geometry of the receiver-satellite link, the time of day (with a diurnal variation that attains a peak around local noon), the time of year (seasonal dependency) and the solar cycle (greater STEC during the peak of the solar cycle). The diurnal variation of STEC on the receiver-satellite links for GPS L1 can be seen for the observation stations in Figs. 2, 5, 8 and 11, for the different levels of solar and geomagnetic conditions.

The B0 cosθ term in Ion2 (Eq. 6) can be taken out of the integral assuming that it is LoS-independent, leaving ∫N ds, i.e. STEC. The Ion3 term contains the integral of the square of the electron density, ∫N² ds (Eq. 6), which can be estimated using the shape parameter η (Hartmann and Leitinger, 1984). This helps to approximate the ionospheric electron density profile in terms of the maximum electron density, N_max, and the shape parameter, η, giving η N_max STEC for this integral. The shape parameter η can be taken as 0.66, which is valid for different satellite elevations and maximum electron densities (Hartmann and Leitinger, 1984). Based on these arguments, Eq. (6) can be written as Eq. (7). For the carrier phase measurements, Eq. (8) represents the ionospheric range errors (in metres) up to the third order, neglecting the bending effect that is associated with the third order error term. From Eq. (7), the magnitudes of Ion2 and Ion3 in the code-based measurements can be expressed as in Eq. (9) and Eq. (10), respectively. From Eqs. (7) and (8), it can be seen that the higher order ionospheric error terms for code measurements can be estimated from the carrier phase measurements, and vice versa, applying the appropriate multiplicative factors (e.g. the magnitude of Ion2 for code observations is twice as large as that for carrier phase observations) and sign notation for each order term. Due to these multiplicative factors it can be understood that the first order linear combination of PR observations does not eliminate the higher order error terms. It can also be seen (Eqs. 7 and 8) that the TEC along the LoS is important for all the error terms (recall that the bending effect in Ion3 is neglected here). Moreover, due to the LoS dependency of TEC, the magnitudes of Ion2 and Ion3 change according to the receiver-satellite geometry. These residual error terms become more important in the differential positioning mode (especially for long baselines, when the signal links pierce through considerably different parts of the ionosphere) and at low latitudes, in particular during high solar activity (when ionization in the ionosphere is expected to be greater).

Accurate values of STEC for Ion2 and Ion3 can be obtained from (a) Global Ionospheric Maps, GIMs, which contain VTEC data accurate to about 2-8 TECU (Feltens and Schaer, 1998), or (b) PR measurements according to Eq.
(2). These two methods show comparable accuracy (2-8 TECU) in the estimated STEC values (Marques et al., 2007). For the former method, VTEC from GIMs is converted into STEC using a mapping function, considering the IPP, given by the intersection of the receiver-satellite path with the ionosphere assumed as a single thin shell at an altitude of 450 km (the same value as taken in RINEX HO), according to the receiver-satellite LoS geometry. As stated by Pajares et al. (2008b), GIMs can provide less accurate STEC values at low latitudes for low elevation satellites; yet, since in this work mid-latitude stations and a cutoff angle of 10° are considered, this is not of concern here. For the latter, there is the need for the receiver and satellite interfrequency biases (also referred to as Differential Code Biases, DCB_rec and DCB_sat, respectively). These frequency-dependent biases are relatively constant in time and must be input in RINEX HO. Within this work they were provided by the Center for Orbit Determination in Europe (CODE, 2010). Non-availability of these biases may halt the process of STEC estimation from PRs (Eq. 2). Thus, for continuity of the calculations, STEC values were obtained from GIMs (a user option in the program).

As can be seen in Eq. (9), Ion2 depends on the projection of the geomagnetic field (B) onto the receiver-satellite link, B0 cosθ, which can be calculated more accurately if a precise geomagnetic field model like the International Geomagnetic Reference Model (IGRM, 2010) is used instead of a dipole model. The GEOPACK library (Geopack subroutines, 2011) contains FORTRAN subroutines for computing the geomagnetic field in the Earth's magnetosphere, transforming between various coordinate systems and tracing along field lines (Tsyganenko, 2001). The IGRM is used in RINEX HO for a physically more realistic and accurate modelling of the geomagnetic field.

In Eq. (10) it can be seen that Ion3 depends on N_max, for which a linear interpolation can be used to approximate N_max in terms of TEC (Fritsche et al., 2005). A modified version of this interpolation is also available (Piraux et al., 2010), where N_max is redefined in terms of TEC (Eq. 11).

The RREs (due to Ion2 and Ion3) are calculated for the GPS L1 and L2 carrier frequencies in RINEX HO. Based on the equations for the higher order range errors (Eqs. 7 and 8), a straightforward calculation (taking 150 TECU along the line of sight, B0 cosθ of about 2.7 × 10⁻⁵ T at the IPP, with B0 taken as 3.12 × 10⁻⁵ T and θ as 30°, and N_max as 4.416 × 10⁻⁶ TEC) gives about 24 m, 2.3 cm and sub-millimetre (negligible, which may be due to the fact that the bending effect is excluded in this calculation) level range errors for Ion1, Ion2 and Ion3, respectively, for GPS L1. For such conditions, for instance, Ion2 is about 0.09 % of Ion1 for the code based range measurements (equivalently 0.05 % for carrier phase range measurements). In this case, using the IF observable to remove the Ion1 term would be accurate up to about 99.9 %; this agrees well with the estimation of Klobuchar (1987) that the residual error of the IF observable is about 0.1 %. However, the crude values assigned to the parameters involved in Eqs. (7) and (8) should be kept in mind in this estimation.
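The rough numbers quoted above can be reproduced with the commonly quoted closed-form approximations for the first- and second-order group errors (a Bassiri and Hajj-type formulation of the kind underlying Eqs. 7 and 9). The coefficients below are standard literature values rather than a transcription of the paper's own equations, so this is only an order-of-magnitude check:

```python
import math

C = 299_792_458.0        # m/s
F_L1 = 1575.42e6         # Hz

def ion1_group(stec, f):
    """First-order group delay (m); stec in electrons/m^2."""
    return 40.3 * stec / f**2

def ion2_group(stec, b0_cos_theta, f):
    """Second-order group error (m), approximately 7527*c*B0*cos(theta)*STEC / f^3."""
    return 7527.0 * C * b0_cos_theta * stec / f**3

# Values used in the text's rough estimate:
stec = 150.0 * 1e16                              # 150 TECU along the line of sight
b0_cos = 3.12e-5 * math.cos(math.radians(30.0))  # ~2.7e-5 T at the pierce point

ion1 = ion1_group(stec, F_L1)
ion2 = ion2_group(stec, b0_cos, F_L1)
print(f"Ion1 ~ {ion1:.1f} m")                    # ~24 m
print(f"Ion2 ~ {ion2 * 100:.1f} cm")             # ~2.3 cm
print(f"Ion2 / Ion1 ~ {100 * ion2 / ion1:.2f} %")
# The third-order term additionally requires N_max and the shape parameter eta,
# and (with bending neglected) it remains at the millimetre level or below here.
```

With the same inputs as the text (150 TECU, B0 = 3.12 × 10⁻⁵ T, θ = 30°), this sketch returns roughly 24 m for Ion1 and 2.3 cm for Ion2 on L1, i.e. Ion2 at about 0.1 % of Ion1, consistent with the estimate above.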
The final stage of the work presented here is to analyze the estimated station coordinates when processing the data in PPP, using the Bernese software (Bernese, 2007). This part of the work focuses on the impact of using the corrected (for Ion2 and Ion3) observation files in PPP, in order to infer the significance of the contribution from the higher order ionospheric effects in GNSS positioning in the European region during different background physical (solar, geomagnetic, ionospheric) conditions.

In the coordinate estimation process, the Ion2 and Ion3 corrections are applied only to the receiver observations; the orbit and clock products (used in the Bernese software) involved in PPP are computed from a global network that does not apply corrections for Ion2 and Ion3. It must be noted that a systematic and accurate investigation of the impact of the higher order terms on PPP requires the use of satellite and orbit products estimated while accounting for these higher order error terms. As shown by Pajares et al. (2008b), a more consistent and correct approach to account for, e.g., Ion2 in PPP can be to perform dual frequency, carrier phase differential positioning, where both the orbits (and other satellite products) and the user coordinates are estimated considering the Ion2 correction. According to Pajares et al. (2007), the effect of Ion2 on the satellite clocks can be larger than 30 picoseconds (1 cm in range equivalent units) and several millimetres on the satellite positions. These products can be obtained from different analysis centres, which may consider either Ion2 or Ion3 or both. For instance, as of 2009 JPL has been considering only Ion2 in their ionospheric model, whereas CODE considers both Ion2 and Ion3 for their products. As stated by Pajares et al. (2008a), using the standard products, which are not corrected for the higher order ionospheric effects, with the corrected GPS observations blurs the net impact of the corrections. However, since a set of standard satellite orbit and clock products corrected for these terms was not yet available (Piraux et al., 2010), the JPL satellite and orbit products estimated without accounting for Ion2 and Ion3 were used in PPP in this work, where only the receiver positions are estimated based on data corrected for the higher order ionospheric effects.

Results for the higher order error terms and the STEC values

The results (RREs for Ion2 and Ion3, PPP station coordinate differences) are shown for Ion2 and Ion3 on the code observations for the GPS L1 frequency, since this applies to a wider user community using the civil code on GPS L1. Due to the inverse frequency dependency of both Ion2 and Ion3, the calculated Ion2 values for GPS L2 are about 2.11 times and the Ion3 values for GPS L2 about 2.71 times those obtained for GPS L1.
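The quoted L2-to-L1 ratios follow directly from the 1/f³ and 1/f⁴ frequency dependencies of Ion2 and Ion3; a one-line check using the nominal GPS carrier frequencies (purely illustrative):

```python
F_L1, F_L2 = 1575.42e6, 1227.60e6
print(f"Ion2(L2)/Ion2(L1) = {(F_L1 / F_L2) ** 3:.2f}")  # ~2.11 (1/f^3 dependence)
print(f"Ion3(L2)/Ion3(L1) = {(F_L1 / F_L2) ** 4:.2f}")  # ~2.71 (1/f^4 dependence)
```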
DOY 312-316 in 2001

These results are presented in Figs. 2 to 4. The diurnal variation in the STEC values can be seen in Fig. 2 for all stations. The midday peak values are greater at the mid-latitudes (e.g. at MATE) than at the high latitudes (e.g. TRO1). For the high latitude station TRO1, the night-time enhancement in the TEC values can be as large as half the noon-time values. Although ionization due to solar radiation is absent at night and free electrons and ions tend to recombine, reducing the amount of ionization, at the high latitudes night-time ionization can continue due to the movement of the ionization from the daytime to the night-time part of the Earth, as well as due to energetic particles arriving with the solar wind in the vicinity of the Earth and penetrating into the lower altitudes of the ionosphere along the almost vertical geomagnetic field lines at these high latitudes. In the latter case, the particles moving vertically downward collide with the ionospheric particles, causing collisional ionization. Such an effect can be observed especially at the high latitudes, where the geomagnetic field lines can route the particles, and during the active period of the solar cycle, when the solar radiation is stronger (Buonsanto, 1999).

As seen in Eq. (9), Ion2 has an LoS dependency; thus for stations at different latitudes the values for Ion2 (Fig. 3) change from being more confined to about −1 cm (for TRO1) to scattering between ±2 cm (for MATE). The negative values for Ion2 (in all plots) are due to the B0 cosθ term, which can attain positive or negative values depending on the satellite-receiver geometry. A mid-latitude station (e.g. MATE) can track the satellites over a wider range of elevation angles, whereas a high latitude station such as TRO1 tracks over a more confined range of elevation angles; thus the LoS dependency, and thereby the Ion2 errors, are different for receivers at different latitudes.

In Fig. 4, it can be seen that the estimated Ion3 is significant during the noon hours for all stations, ranging from about 2 to 3 mm from north (TRO1) to south (MATE). It can be observed that for the high latitudes there can be a significant correction for Ion3 during the night-time hours as well (see station TRO1 in Fig. 4).

DOY 294-296 in 2001

These results are presented in Figs. 5 to 7. Comparing the results in Fig. 5 with those in Fig. 2, it can be noticed that the night-time enhancement in the TEC values can be as large as the noon-time values (e.g. station TRO1 in Fig. 5) when there are geomagnetic storms in addition to high background solar activity. This causes almost two peaks in the diurnal TEC values, especially for the high latitude stations. Movement of the ionization from the day side to the night side of the Earth at the high latitudes can be enhanced by the geomagnetic storms (Ho et al., 1997). As seen in Fig. 5 for ONSA, the enhancement in the auroral TEC at night time can be due to the expansion of the auroral oval under the influence of the geomagnetic storms during this peak year of the solar cycle, as is also evident in the high Kp values. Such a night-time enhancement in TEC is not apparent in Fig. 2 for ONSA. The mid-latitude stations are observed not to have enhanced levels of night-time TEC, which means that the equatorward expansion of the auroral oval was not significant enough to influence the ionosphere above these stations during these geomagnetic conditions.

It can be seen in Fig. 6 that the error due to Ion2 is overall within 1-2 cm, which agrees with the values observed in Fig. 3. However, due to the geomagnetic storms, which may cause enhancements in the ionization levels (i.e. higher TEC values), the Ion2 values for ONSA can be noticeably large during the night-time hours as well; such an enhancement is not apparent in Fig. 3 for ONSA. Also, the night-time values for Ion2 at TRO1 can be as large as the noon-time values (TRO1 and ONSA in Fig.
6). Since the B0 cosθ term in Ion2 calculated with the IGRM does not, in general, take the actual geomagnetic disturbances into account, the enhanced values of Ion2 can be more correctly related to the enhancement in TEC. However, it should be noted that TEC can increase due to the geomagnetic storms during such conditions. The similarity between the Ion2 error profile (Fig. 6) and the corresponding TEC values along the signal links (Fig. 5) suggests a strong dependency of Ion2 on STEC.

Compared with Fig. 4, it can be seen in Fig. 7 that, when geomagnetic disturbances are also considered, the error due to Ion3 becomes significant during the midday hours for all stations; however, this time the magnitude of the error ranges from about 1 to 5 mm from north (TRO1) to south (MATE). The noticeable night-time peaks in Fig. 7 for TRO1 and ONSA suggest that for the high latitude stations Ion3 may need to be considered during the night-time hours as well. Overall, it can be seen in both Figs. 4 and 7 that Ion3 is important during daytime, amounting to a few millimetres in the range errors at these stations during the peak of the solar cycle.

DOY 301-307 in 2003

These results are presented in Figs. 8 to 10. Although within a post-peak year of solar cycle 23, the period DOY 301-307 in 2003 coincides with the so-called Halloween Storm, during which the solar radio flux index, F10.7 (a measure of the radio emission from the Sun at 10.7 cm wavelength, correlating well with the sunspot number and used as an indicator of solar activity; Ionospheric Prediction Service, IPS, 2011), went up to as high as 270/275 solar flux units (Space Weather Prediction Center, SWPC, 2011). It should be mentioned, however, that the change in the geomagnetic field induction due to geomagnetic disturbances, and the estimation of its influence on the Ion2 term, are not straightforward.

During this period, the increase in the Ion2 error term is expected to be due to higher levels of solar activity as well as to the disturbances in the geomagnetic field. Similarly, the increase in Ion3 can be associated with the higher levels of solar activity during this period. The results show high levels of ionization during this period: Ion2 ranges from about 1 to 2 cm (Fig. 9) and Ion3 from the sub-millimetre level to about 2 mm (Fig. 10), from TRO1 to MATE in both cases. Thus, even slightly outside the highest phase of the solar cycle, the solar activity can be strong enough to enhance the ionization in the ionosphere. During the high phase of the geomagnetic storm (DOY 301-302, 2003), Ion2 and Ion3 show a significant enhancement, whereas in the absence of such storms (e.g. DOY 312-316, 2001) the higher order error terms are more predictable.

DOY 321-326 in 2006

These results are presented in Figs. 11 to 13. In 2006, it can be seen that during the quiet period of the solar cycle the ionization in the ionosphere is low, about 20-40 TECU (Fig. 11), and the corresponding higher order range errors are also less significant than those during the active period of the solar cycle: Ion2 is observed to be one order of magnitude smaller (at the millimetre level, as seen in Fig. 12, as opposed to the centimetre level observed during the other analysis periods) and Ion3 is at the sub-millimetre level (Fig. 13) during this quiet period of the solar cycle in the absence of geomagnetic disturbances.

Between the high and low periods of the solar cycle (November 2001 and November 2006, respectively, without the influence of geomagnetic disturbances in both cases) there is a significant difference in the calculated STEC values (e.g.
as high as about 160 TECU during the peak of the solar cycle in November 2001 at the mid-latitudes, and about 40 TECU during the quiet period in November 2006 at the same latitudes). Significant differences are also observed in the Ion2 and Ion3 values: they change by an order of magnitude between the two periods, indicating that the significantly different TEC values along the signal paths cause the different values of the higher order range errors during these two periods. This highlights the importance of considering the higher order range errors during the upcoming solar maximum, predicted for around 2013.

During this quiet period of the solar cycle (i.e. characterised by low levels of ionization in the ionosphere), when geomagnetic field disturbances are negligible, the higher order ionospheric error terms should not degrade the measurement accuracy significantly.

PPP results

PPP is a high accuracy positioning method which can be performed with a dual frequency receiver (so that the IF observable can be used) and which exploits highly accurate, externally provided (e.g. by the IGS) satellite orbit and clock corrections (Bernese, 2007). Due to its high (potentially centimetre level) positioning accuracy, PPP was performed in this work with the Bernese software to investigate the impact of correcting the GPS range observations for the errors due to Ion2 and Ion3. With the Bernese software, a free network solution can be carried out, i.e. a solution where the satellite orbits define the coordinate system to which the estimated positions refer.

As detailed before, RINEX HO applies corrections for Ion2 and Ion3 to the GPS observation files in the RINEX format and outputs a corresponding corrected observation file in the same format. In this work, PPP was performed with both observation files, i.e. a file with the "uncorrected" and another with the "corrected" (for the higher order terms) observations. Figures 14 to 17 show how much the station coordinates differ in the two cases (in latitude, longitude and ellipsoidal height) for all four sets of days and stations being analysed. The differences in all three coordinate components are computed by subtracting the PPP results obtained with the original (uncorrected) observation files from those obtained with the corrected files (see Table 3 for the numerical values of the differences). The results are discussed below:

- Considering the high solar activity period with negligible disturbances in the geomagnetic field (DOY 312-316 in 2001, Fig. 14), when the corrections for Ion2 and Ion3 mainly account for the impact of the solar activity, it can be observed (Fig. 14, top plot) that the high latitude stations get northward corrections (about 2-3 mm) and the mid-latitude stations southward corrections (about 1 cm). During this period all stations are observed to have westward corrections (about 1-2 cm) in general (Fig. 14, middle plot), and the vertical component of the station coordinates was in general greater (by about 2-3 cm) for the mid-latitudes and smaller for the high latitudes (Fig. 14, bottom plot) when the corrected observation files were used in PPP. Pajares et al. (2007), who focus only on the Ion2 term and its impact on the geodetic estimates, show that applying the Ion2 correction to sub-daily differential positioning (using the IGS data network) changes the receiver positions at the sub-millimetre level, northward for the high latitudes and southward for the low latitudes.
- Considering the high solar activity period with disturbed geomagnetic conditions, when Kp was as large as 7 (DOY 294-296 in 2001, Fig. 15), it is difficult to identify a general common trend for the latitudinal corrections, which are of a few millimetres (Fig. 15, top plot). During this period, the previously observed westward correction (Fig. 14, middle plot) seems to be suppressed. The estimated vertical corrections (Fig. 15, bottom plot) are overall upward; however, the mid-latitudes are corrected downward (at the sub-cm level) on average. It should be pointed out that the short observation period considered here may hinder a more conclusive analysis.

- During the post-peak period of the solar cycle with disturbances in the geomagnetic field, during the so-called Halloween Storm (DOY 301-307 in 2003, Fig. 16), overall a southward correction (Fig. 16, top plot) can be deduced, with magnitudes mostly at the millimetre level but at times a few centimetres. A distinguishable feature during this post-peak period can be observed in the longitudinal corrections (Fig. 16, middle plot): the geomagnetic activity seems to suppress changes in the longitude component of the station coordinates. Regarding the estimated heights of the stations during this period, corrections at the 1-2 cm level can be observed (Fig. 16, bottom plot), such that the high latitudes are corrected downward and the mid-latitudes upward.

- Considering the period of low background solar activity with quiet geomagnetic conditions (DOY 321-326 in 2006, Fig. 17), it can be seen in the horizontal station components that the PPP results do not show significant differences when the observation files are corrected for Ion2 and Ion3 (Fig. 17, top and middle plots). It is difficult to observe a general trend in the direction of the vertical corrections (Fig. 17, bottom plot).

Considering that PPP can potentially provide centimetre level accuracy for the estimated station coordinates, and that the corrections for Ion2 and Ion3 are at about the centimetre and millimetre levels, respectively, during adverse ionospheric and geomagnetic conditions, it can be expected that the impact of Ion3 in PPP may be unnoticeable due to the noise level of the positioning. It can be expected that the differences in the estimated station coordinates (using the corrected and original observation files) would be mostly due to the corrections for Ion2, and then for Ion3 and the noise level of the positioning solution.
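The PPP comparison above reports coordinate differences at the millimetre-to-centimetre level. As a generic illustration of how small latitude and longitude differences map to ground distances, the sketch below performs a small-angle conversion to a local north/east/up frame on the WGS-84 ellipsoid; it is not the Bernese procedure, and the station latitude, height and coordinate differences used are hypothetical.

```python
import math

A = 6378137.0            # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared

def geodetic_diff_to_neu(lat_deg, h_m, dlat_deg, dlon_deg, dh_m):
    """Convert small geodetic differences to local north/east/up offsets in metres."""
    lat = math.radians(lat_deg)
    w = math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    m_radius = A * (1.0 - E2) / w ** 3      # meridian radius of curvature
    n_radius = A / w                        # prime-vertical radius of curvature
    north = math.radians(dlat_deg) * (m_radius + h_m)
    east = math.radians(dlon_deg) * (n_radius + h_m) * math.cos(lat)
    return north, east, dh_m

# Hypothetical "corrected minus uncorrected" differences for a mid-latitude station:
n, e, u = geodetic_diff_to_neu(40.65, 535.0, -1.0e-7, -2.0e-7, 0.02)
print(f"north {n * 1000:.1f} mm, east {e * 1000:.1f} mm, up {u * 1000:.1f} mm")
```

Angular differences of the order of 10⁻⁷ degrees thus correspond to roughly a centimetre on the ground, which is the scale of the corrections discussed above.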
6 Discussion and suggestions for future work

Based on the analysis involved in this work, the following points can be drawn:

- Enhancement in the TEC values can occur due to greater solar activity, as during the peak of the solar cycle. Similarly, geomagnetic field disturbances can also drive mechanisms that enhance the ionization in the ionosphere. As TEC is an important parameter in calculating the range errors due to the ionosphere, a larger contribution from the ionospheric error terms is expected when the TEC values are higher. During the post-peak period of the solar cycle, if geomagnetic field disturbances are present (e.g. DOY 301-307 in 2003), the higher order ionospheric error terms were observed to contribute to the overall measurement accuracy at magnitudes comparable to those occurring during the peak of the solar cycle without the presence of such disturbances (e.g. DOY 312-316 in 2001). During the quiet period of the solar cycle (low ionization levels in the ionosphere), when the geomagnetic field disturbances were negligible (e.g. DOY 321-326 in 2006), the higher order ionospheric error terms were observed to be very small. Even during the post-peak years of the solar cycle, high TEC values may be observed, which can be explained by the geomagnetic activity that can route the incoming solar particles to lower altitudes in the ionosphere, especially at the high latitudes, where the geomagnetic field lines are mostly directed towards the surface of the Earth. Thus, enhancement in TEC should not be expected only during the peak years of the solar cycles; in general, it can be observed in correlation with increased levels of geomagnetic activity.

- In terms of the effects of Ion2 and Ion3 on the longitude, a general westward correction was observed in this work (during the active period of the solar cycle), with the mid-latitude stations observed to be affected more than the high latitude ones, and the latitudinal corrections were northward for the high latitudes. For the height component, the mid-latitude stations are observed to be corrected upward in general and the high latitude ones downward.

- The diurnal variation in the RREs was such that a minimum was observed before sunrise and after sunset, with a maximum around noon, for the stations analysed in Europe. The strong diurnal variation in TEC is expected to be a reason for this.

- Had the corrected satellite orbit and clock products been used in PPP, a more systematic and realistic analysis of the differences in the positioning results could have been carried out. In the approach followed in this work, the net effect of correcting the observation files is expected to be obscured, since the corrections were applied only to the receiver observations.

- Comparing the Ion2 and Ion3 values presented in this work, the diurnal variation in Ion3 is stronger than that in Ion2 (i.e.
the relative difference between the minimum and maximum values is greater for Ion3 than for Ion2). This can be explained by the fact that, in addition to the dependence on STEC, Ion2 depends on the projection of the geomagnetic field onto the signal path and Ion3 on the maximum electron density in the ionosphere; in the former no diurnal variation is anticipated (the geomagnetic reference model, IGRM, used for the magnetic field does not involve TEC, so a clear relation between this magnetic term and TEC cannot be established), whereas in the latter a diurnal variation can be anticipated, since the maximum electron density normally reaches its maximum at about the local afternoon (Ratcliffe, 1972). - It was observed that both the horizontal and vertical components of the estimated station coordinates in PPP can differ at the cm-mm level when the observation files corrected for Ion2 and Ion3 are used. Differences in the estimated horizontal components can be influenced not only by the presence but also by the strength of the geomagnetic field disturbances. It was also observed that, when PPP is performed with both the corrected and the original observation files, the differences in the estimated station coordinates during the non-peak period of the solar cycle can be comparable to those during the peak period if the former has a contribution from significant levels of geomagnetic field disturbances. - This work describes a method to analyze the higher-order ionospheric effects on GNSS observations and on positioning; future work will be carried out on the basis of this methodology, further accounting for the bending effect in the Ion3 term. Longer periods will also be analyzed: monthly or yearly analyses of the higher-order ionospheric effects can be performed with open-sky observations during the upcoming solar maximum, expected around 2013. Future work can also consider the new GNSS signals, such as GPS L2C, L5 and Galileo L1 and E5, which may be available by then. This can allow an assessment of the advantage of these new signals for the GNSS user community in correcting range errors related to the ionosphere, especially its higher-order terms, which have so far been mostly ignored. Fig. 16. "Corrected - Uncorrected" PPP results for the geodetic coordinates for the stations on DOY 301-307 in 2003. Table 1. Coordinates of the IGS stations considered for analyses in this work. Table 2. Days used in the analyses (given as day-of-year, DOY) and the corresponding 3-hourly Kp values for each day, such that the first Kp value corresponds to midnight 00:00 LT. Table 3. Differences in the calculated station coordinates (delta height/latitude/longitude, in meters) when PPP is performed with the corrected and uncorrected observation files. Positive differences in height, latitude and longitude are upward, northward and eastward, respectively.
13,438.6
2011-08-22T00:00:00.000
[ "Engineering", "Environmental Science" ]
A 4-trifluoromethyl analogue of celecoxib inhibits arthritis by suppressing innate immune cell activation Introduction Celecoxib, a highly specific cyclooxygenase-2 (COX-2) inhibitor has been reported to have COX-2-independent immunomodulatory effects. However, celecoxib itself has only mild suppressive effects on arthritis. Recently, we reported that a 4-trifluoromethyl analogue of celecoxib (TFM-C) with 205-fold lower COX-2-inhibitory activity inhibits secretion of IL-12 family cytokines through a COX-2-independent mechanism that involves Ca2+-mediated intracellular retention of the IL-12 polypeptide chains. In this study, we explored the capacity of TFM-C as a new therapeutic agent for arthritis. Methods To induce collagen-induced arthritis (CIA), DBA1/J mice were immunized with bovine type II collagen (CII) in Freund's adjuvant. Collagen antibody-induced arthritis (CAIA) was induced in C57BL/6 mice by injecting anti-CII antibodies. Mice received 10 μg/g of TFM-C or celecoxib every other day. The effects of TFM-C on clinical and histopathological severities were assessed. The serum levels of CII-specific antibodies were measured by ELISA. The effects of TFM-C on mast cell activation, cytokine producing capacity by macophages, and neutrophil recruitment were also evaluated. Results TFM-C inhibited the severity of CIA and CAIA more strongly than celecoxib. TFM-C treatments had little effect on CII-specific antibody levels in serum. TFM-C suppressed the activation of mast cells in arthritic joints. TFM-C also suppressed the production of inflammatory cytokines by macrophages and leukocyte influx in thioglycollate-induced peritonitis. Conclusion These results indicate that TFM-C may serve as an effective new disease-modifying drug for treatment of arthritis, such as rheumatoid arthritis. Introduction In the past decade, a series of potent new biologic therapeutics have demonstrated remarkable clinical efficacy in several autoimmune diseases, including rheumatoid arthritis (RA). In the case of RA, a chronic progressive autoimmune disease that targets joints and occurs in approximately 0.5 to 1% of adults, biologic agents, such as TNF inhibitors, have proven effective in patients not responding to disease-modifying anti-rheumatic drugs, such as methotrexate. However, about 30% of patients treated with a TNF inhibitor are primary non-responders. Moreover, a substantial proportion of patients experience a loss of efficacy after a primary response to a TNF inhibitor (secondary non-responders) [1][2][3]. More recently, as new therapies have become available, including biological agents targeting IL-6, B cells and T cells, it has become clear that a notable proportion of patients respond to these new biological agents even among primary and secondary non-responders to TNF inhibitors [3][4][5][6][7][8][9][10]. These individual differences in response to each agent highlight the difficulty and limit of treating multifactorial disease by targeting single cytokine or single cell type. Patient-tailored therapy might be able to overcome this issue, but good biomarkers to predict treatment responses have not yet been elucidated. Therefore, as described above, biological drugs have limited values. In addition, such drugs may be accompanied by serious side effects [11,12]. Furthermore, the high cost of these biological drugs may make access to these reagents prohibitive for the general public. Alternative therapeutic options, such as small molecule-based drugs, continue to be an important challenge. 
The involvement of prostaglandin pathways in the pathogenesis of arthritis has been shown in animal models by using mice lacking genes, such as cycolooxygenase-2 (COX-2), prostaglandin E synthase, or prostacyclin receptor [13][14][15]. As COX-2 knockout mice normally develop autoreactive T cells in collagen-induced arthritis (CIA) [13], prostaglandin pathways appear to be involved mainly in the effector phase of arthritis. However, treatment with celecoxib, a prototype drug belonging to a new generation of highly specific COX-2 inhibitors has been reported to have only mild suppressive effects on animal models of arthritis, and strong inhibition of arthritis was achieved only when mice were treated in the combination of celecoxib with leukotriene inhibitors [16][17][18][19]. In humans, although celecoxib is widely used as an analgesic agent in patients with RA or osteoarthritis, there is no evidence that celecoxib therapy modulates the clinical course of RA. In addition, recently it has been shown that celecoxib enhances TNFα production by RA synovial membrane cultures and human monocytes [20]. Celecoxib has been reported to exhibit COX-2-independent effects, such as tumor growth inhibition and immunomodulation [21,22]. Previously, we demonstrated that celecoxib treatment suppressed experimental autoimmune encephalomyelitis (EAE) in a COX-2 independent manner [22]. We recently developed a trifluoromethyl analogue of celecoxib (TFM-C; full name: 4-[5-(4-trifluoromethylphenyl)-3-(trifluoromet-hyl)-1Hpyrazol-1-yl]benzenesulfonamide), with 205-fold lower COX-2-inhibitory activity. In studies using recombinant cell lines, TFM-C inhibited secretion of the IL-12 family cytokines, IL-12, p80 and IL-23, through a COX-2-independent, Ca 2+ -dependent mechanism involving chaperone-mediated cytokine retention in the endoplasmic reticulum coupled to degradation via the ER stress protein HERP [23,24]. In the present study, we demonstrate that TFM-C inhibits innate immune cells and animal models of arthritis, including CIA and type II collagen antibody-induced arthritis (CAIA), in contrast to the limited inhibitory effect of celecoxib. TFM-C suppresses the activation of mast cells in arthritic joints. Moreover, TFM-C treatment suppresses the production of inflammatory cytokines by macrophages and leukocyte recruitment. These findings indicate that TFM-C may serve as an effective new drug for the treatment of arthritis, including RA. Materials and methods Differentiation and stimulation of U937 cells Human U937 cells were obtained from the American Type Culture Collection (Rockville, MD, USA) and cultured in RPMI 1640 supplemented with 10% FCS. To differentiate U937 cells, 5 × 10 5 cells were treated with PMA (25 ng/ml) for 24 hours. At 22 hours of PMA treatment, 50 μM of TFM-C was added for 2 hours. Subsequently, cells were stimulated with 5 μg/ml of LPS and PMA (25 ng/ml) for 0, 3, 6, 12 and 24 hours in the presence or absence of TFM-C. Supernatants were harvested and assayed for cytokine production by means of Quansys Q-Plex™ Array (Quansys Bioscience, Logan, Utah, USA). RNA isolation was performed following the manufacturer's instructions (Macherey-Nagel, Düren, Germany). Quantitative RT-PCR (qPCR) A total of 200 ng of RNA extracted from U937 cells was retrotranscribed to cDNA using random primers according to the manufacturer's protocol (Applied Biosystems, Carlsbad, California, USA). 
qPCR was performed with the Supermix for SsoFast EvaGreen (Biorad, Hercules, California, USA) on a 7500 Fast Real-Time PCR System (Applied Biosystems). For each target gene, qPCR QuantiTect Primer Assays were used (Qiagen, Hilden, Germany). For each sample, expression levels of the transcripts of interest were compared to that of endogenous GAPDH. The levels of mRNA were calculated as 2^-ΔCt. Quansys Q-Plex™ Array chemiluminescent A total of 30 μl of medium from differentiated U937 cells treated with PMA/LPS/TFM-C or LPS/PMA was analyzed. Human Cytokine Stripwells (16-plex) were used following the manufacturer's instructions. The image was acquired using a Bio-Rad ChemiDoc camera and analyzed with Q-View Software (Quansys Bioscience, Logan, Utah, USA). DAPI staining Differentiated U937 cells were treated with LPS/PMA/TFM-C for 6, 12 and 24 hours and then fixed with 2% PFA. The cells were washed three times with PBS and then incubated with DAPI (1:50000; Molecular Probes, Carlsbad, California, USA) in PBS. Coverslips were embedded in Fluoro-Gel (Electron Microscopy Science, Hatfield, Pennsylvania, USA). Images were recorded using the ApoTome system (AxioVision, Carl Zeiss, Inc., Oberkochen, Germany) and analyzed using the ImageJ program (version 1.40, Bethesda, Maryland, USA). AlamarBlue staining of U937 cells The number of viable cells was tested at 6, 12, and 24 hours after TFM-C exposure by adding the AlamarBlue reagent (AbD Serotec, Cambridge, UK). Absorbance was measured at wavelengths of 570 nm and 600 nm after the required incubation, using a Varioskan Flash (Thermo Scientific). Clinical assessment of arthritis Mice were examined for signs of joint inflammation and scored as follows: 0: no change, 1: significant swelling and redness of one digit, 2: mild swelling and erythema of the limb or swelling of more than two digits, 3: marked swelling and erythema of the limb, 4: maximal swelling and redness of the limb and, later, ankylosis. The average macroscopic score was expressed as a cumulative value for all paws, with a maximum possible score of 16. Thioglycollate-induced peritonitis Mice were injected with 1 ml of 4% sterile thioglycollate intraperitoneally. Four hours later, mice were killed and peritoneal lavage fluid was collected by washing the peritoneal cavity with cold PBS containing 5 mM EDTA and 10 U/ml heparin. Administration of TFM-C or celecoxib TFM-C and celecoxib were synthesized as previously described [23]. We injected TFM-C or celecoxib intraperitoneally (i.p.) in 0.5% Tween/5% DMSO/PBS. In CIA experiments, mice received 10 μg/g TFM-C or celecoxib every other day from 21 days after immunization. In CAIA, we injected the mice with 10 μg/g of TFM-C or celecoxib every other day starting two days before disease induction. In thioglycollate-induced peritonitis experiments, mice received 10 μg/g of TFM-C or celecoxib two days and one hour before thioglycollate injection. The control animals were injected with vehicle alone. Histopathology Arthritic mice were sacrificed and all four paws were fixed in buffered formalin, decalcified, embedded in paraffin, sectioned, and then stained with H&E. Histological assessment of joint inflammation was scored as follows: 0: normal joint, 1: mild arthritis: minimal synovitis without cartilage/bone erosions, 2: moderate arthritis: synovitis and erosions but joint architecture maintained, 3: severe arthritis: synovitis, erosions, and loss of joint integrity.
The average of the macroscopic score was expressed as a cumulative value of all paws, with a maximum possible score of 12. Mast cells in synovium were visually assessed for intact versus degranulating mast cells using morphologic criteria. Mast cells were identified as those cells that contained toluidine blue-positive granules. Only cells in which a nucleus was present were counted. Degranulating cells were defined by the presence of granules outside the cell border with coincident vacant granule space within the cell border, as described previously [25]. Measurement of CII-specific IgG1 and IgG2a Bovine CII (1 mg/ml) was coated onto ELISA plates (Sumitomo Bakelite, Co., Ltd, Tokyo, Japan) at 4°C overnight. After blocking with 1% bovine serum albumin in PBS, serially diluted serum samples were added onto CII-coated wells. For detection of anti-CII Abs, the plates were incubated with biotin-labeled anti-IgG1 and anti-IgG2a (Southern Biotechnology Associates, Inc., Birmingham, AL, USA) or anti-IgG Ab (CN/Cappel, Aurora, OH, USA) for one hour and then incubated with streptavidin-peroxidase. After adding a substrate, the reaction was evaluated as OD450 values. Stimulation of macrophages B6 mice received 10 μg/g of TFM-C or control vehicle on Day 0 and Day 2, and on Day 3, splenic macrophages were collected and stimulated with LPS in vitro in the presence of TFM-C or vehicle. Detection of cytokines Cytokine levels in the culture supernatant were determined by using a sandwich ELISA. The Abs for the IL-1β ELISA were purchased from BD Biosciences (San Jose, CA, USA) and the ELISA Abs for IL-6 and TNFα were purchased from eBioscience (San Diego, CA, USA). Statistical analysis CIA and CAIA clinical or pathological scores for groups of mice are presented as the mean group clinical score + SEM, and statistical differences were analyzed with a non-parametric Mann-Whitney U-test. Data for cytokines were analyzed by an unpaired t-test. TFM-C inhibits cytokine secretion from activated U937 cells concomitant with induction of an ER stress response In a recombinant cell system, TFM-C inhibits IL-12 secretion via a mechanism involving the induction of ER stress coupled to intracellular degradation of the cytokine polypeptide chains via the ER stress protein HERP [23,24,26]. In order to verify whether the cytokine secretion-inhibitory effect of TFM-C extends to natural cytokine producer cells, we assessed its effect using PMA/LPS-activated U937 macrophages, a well-known source of multiple cytokines. TFM-C potently blocked secretion of IL-1β, IL-6, IL-8, IL-10, IL-12 and TNF-α (Figure 1A, C). By means of qPCR, TFM-C was found to suppress mRNA production of IL-10 over the course of the experiment, and, at 12 and 24 h of TFM-C treatment, of IL-1β. Virtually no effect was seen on mRNA production of TNF-α and IL-8, while TFM-C increased IL-6 mRNA between 6 and 12 h. To verify whether TFM-C induced an ER stress response in U937 cells, we measured mRNA of HERP and IL-23p19, both of which have been associated with induction of ER stress [24,26,27]. This showed significant up-regulation of both genes by TFM-C, while the housekeeping gene GAPDH was not affected (Figure 1D). Viability of U937 cells following exposure to TFM-C was assessed using two different methods (Figure 1B), and showed a limited percentage of apoptotic cells, not exceeding 15 to 20% following 12 to 24 h of treatment.
Thus, TFM-C blocks cytokine secretion in natural producer cells by ER stress-related mechanisms that may involve repressive effects on both cytokine mRNA production as well as on post-transcriptional and -translational events involved in cytokine secretion, such as the ER-retention coupled to HERP-mediated degradation identified before for IL-12 [23,24,26]. However, of the TFM-C-sensitive cytokines identified in this experiment, IL-1β follows an unconventional protein secretion route involving exocytosis of endolysosome-related vesicles not derived from the ER/Golgi system [28]. Given its blockage by TFM-C, which can not be explained by partial suppression of mRNA levels only, this indicates that TFM-C may suppress secretion of cytokines via interfering with both conventional ER-dependent and unconventional ER-independent transit routes. TFM-C inhibits CIA First, we examined the effect of TFM-C on CIA induced by immunizing DBA1/J mice with type II collagen. As shown in Figure 2A, administration of TFM-C strongly suppressed the severity of arthritis compared with vehicle-treated mice (P-value, < 0.05 by Mann-Whitney Utest compared with control from Day 26 and Day 36.). In contrast, administration of celecoxib showed only a mild suppressive effect on CIA, which is consistent with a previous report [19] (P-value, < 0.05 by Mann-Whitney U-test compared with control at Day 29 and Day 31.) In addition to visual scoring, we analyzed the histological features in the joints of four paws from TFM-C-, celecoxib-or vehicle-treated mice 37 days after disease induction. Quantification of the histological severity of arthritis is shown in Figure 2B and typical histological features are demonstrated in Figure 2C. Arthritis was not apparent in joints treated with TFM-C ( Figure 2C, rightmost panel) compared to the severe arthritis with massive cell infiltration, cartilage erosion and bone destruction seen in joints of animals treated with vehicle ( Figure 2C, leftmost panel). Both the clinical scores and pathological features of arthritis were significantly less severe in TFM-C-treated mice (Figure 2A-C). The pathological features, including cell infiltration and destruction of cartilage and bone, were slightly less severe in celecoxib-treated mice even though there is no statistically significant difference compared to vehicletreated mice ( Figure 2B). We next examined anti-CII antibody in TFM-C-, celecoxib-or vehicle-treated arthritic mice. There was a trend for reduction in both IgG1 and IgG2a isotypes as well as total IgG anti-CII in TFM-C-treated mice compared to vehicle-treated mice ( Figure 2D), but the difference did not reach statistical significance. These results indicate that TFM-C possesses a potent inhibitory effect on CIA compared to vehicle or celecoxib. However, TFM-C treatment had little effect on CII-specific responses. TFM-C inhibits CAIA Although TFM-C treatment suppressed clinical and pathological severities of CIA, CII-specific antibody levels were not reduced by TFM-C treatment. Therefore, we hypothesized that TFM-C treatment may have a strong inhibitory effect on the effector phase of arthritis. To test this hypothesis, we examined the effect of TFM-C on CAIA induced by injecting a mixture of monoclonal antibodies against type II collagen (CII) followed by lipopolysaccharide (LPS) administration two days later. The major players in CAIA are innate immune cells while adaptive immune cells are not required for disease development. 
Therefore, CAIA has value as an animal model to study the effector phase of arthritis. In vehicle-treated mice, severe arthritis occurred one week after CII antibody injection, and administration of celecoxib inhibited arthritis slightly (Figure 3A). In contrast, administration of TFM-C significantly suppressed CAIA compared to vehicle or celecoxib treatment. We next analyzed the histological features in the joints of four paws from vehicle-, TFM-C- and celecoxib-treated mice 12 days after disease induction. Quantification of the histological severity of arthritis is shown in Figure 3B and typical histological features are presented in Figure 3C. Massive cell infiltration, cartilage erosion, and bone destruction were observed in joints of vehicle-treated or celecoxib-treated mice but not in those of TFM-C-treated mice (Figure 3C). TFM-C inhibits mast cell activation in CAIA Next, we sought to understand the mechanism through which TFM-C treatment suppressed arthritis in CAIA. Since mast cells have been demonstrated to be critical for initiation of antibody-induced arthritis [29], we evaluated the effect of TFM-C on the activation of mast cells. Because degranulation is the clearest histological hallmark of mast cell activation, joint mast cells were visually assessed for an intact versus degranulating phenotype after staining with toluidine blue. The proportion of degranulated mast cells was significantly lower in TFM-C-treated mice compared to that in celecoxib- or vehicle-treated mice (Figure 4A, B). TFM-C suppresses the activation of macrophages Innate immune cells and inflammatory cytokines, such as IL-1 and TNF-α, are critical for disease development in CAIA [30]. Thus, we next determined the effect of TFM-C on the production of inflammatory cytokines by macrophages. Splenic macrophages from mice treated with TFM-C, celecoxib or control vehicle were stimulated with LPS ex vivo, and the cytokines in the culture supernatants were measured by ELISA. The production of IL-1, IL-6 and TNF-α by macrophages was efficiently suppressed in TFM-C-treated mice compared to vehicle-treated mice (Figure 5). In celecoxib-treated mice, although the production of IL-1β was decreased, the production of other cytokines such as IL-6 and TNF-α was not suppressed, and IL-6 production was even enhanced compared to vehicle-treated mice. TFM-C suppresses leukocyte influx in thioglycollate-induced peritonitis The other key players in antibody-induced arthritis are neutrophils [31][32][33][34]. Neutrophils are recruited to joint tissue, and depletion of neutrophils has been shown to suppress disease susceptibility and severity in CAIA [35]. An intraperitoneal injection of thioglycollate causes leukocyte influx into the peritoneum from the bone marrow and circulation, and neutrophils are the major cell population that first emigrates to the peritoneal cavity. To assess the effect of TFM-C on neutrophil recruitment, mice were treated with TFM-C, celecoxib or control vehicle, and thioglycollate was injected intraperitoneally. Leukocyte cell numbers in the peritoneal cavity four hours after thioglycollate injection were comparable between the control and celecoxib-treated groups (Figure 6). However, the peritoneal infiltrating cell numbers were reduced in mice treated with TFM-C, suggesting a suppressive effect of TFM-C on neutrophil recruitment.
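As a minimal illustration of the group comparison applied to the clinical data above (cumulative per-mouse arthritis scores compared with a non-parametric Mann-Whitney U-test, as stated in the Statistical analysis section), the sketch below uses made-up score vectors; the numbers are placeholders, not data from this study.

```python
from scipy.stats import mannwhitneyu

# Hypothetical cumulative clinical scores (sum over four paws, maximum 16 per mouse)
# for two treatment groups at one time point; values are illustrative only.
vehicle_scores = [10, 12, 9, 14, 11, 13]
tfm_c_scores = [3, 5, 2, 6, 4, 3]

# Non-parametric two-sided comparison, as applied to the CIA/CAIA scores in the text.
u_stat, p_value = mannwhitneyu(vehicle_scores, tfm_c_scores, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.4f}")
```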
Taken together, these results indicate that the activation of innate immune cells, including mast cells, macrophages, and neutrophils, is suppressed in TFM-Ctreated mice but not in celecoxib-treated mice. Discussion In the present study we demonstrate, using arthritis models, that TFM-C, a celecoxib analogue with 205-fold lower COX-2-inhibitory activity, inhibits autoimmune disease. TFM-C differs from celecoxib by the substitution of the 4-methyl group by a trifluoromethyl group. This substitution drastically increases the IC 50 s for inhibition of COX1 (15 μM to >100 μM for celecoxib and TFM-C, respectively) and COX2 (0.04 μM to 8.2 μM, respectively), but does not affect the apoptotic index measured in PC3 prostate cancer cells, indicating independence between structural requirements for COX-2 inhibition and apoptosis induction [36]. Celecoxib perturbs intracellular calcium by blocking ER Ca 2+ ATPases, and this activity is shared with TFM-C [23,37]. In a HEK293 recombinant cell system, this Ca 2+ perturbation is associated with inhibition of secretion and altered intracellular interaction of IL-12 polypeptide chains with the ER chaperones calreticulin and ERp44, and results in the interception of IL-12 by HERP followed by degradation of the cytokine [23,24,26]. While IC 50 s for inhibition of IL-12 secretion by celecoxib or TFM-C are similar [23,24], in the present paper, we show that TFM-C inhibits production of various cytokines from activated macrophages (Figures 1 and 5) and exerts a strikingly stronger inhibitory effect on arthritis models compared to celecoxib. Given that the main biological difference between celecoxib and TFM-C resides in the extent of COX-1 and -2 inhibition, it is, therefore, likely that the less potent effect of TFM-C on COX1/2 inactivation is a contributing, disease-limiting rather than disease-promoting factor in these arthritis models. Indications supporting this concept come from a study showing increased LPS-induced macrophage production of TNF-α by inactivation of COX-2 with celecoxib [38]. Up-regulation of TNF-α by celecoxib was also reported in human PBMCs, rheumatoid synovial cultures and whole blood [20]. The relation between the anticipated extent of COX inhibition and production of TNF-α was observed in the present study ( Figure 5), where activated macrophages showed a tendency toward increased or decreased TNF-α production in the presence of celecoxib or TFM-C, respectively, compared to vehicle-treated cells. In this cell system ( Figure 5), celecoxib significantly increased production of the pro-inflammatory cytokine IL-6 while TFM-C suppressed it. Pending future mechanistic studies, this data indicate that prostaglandin-mediated suppressive effects, or other, as yet to be identified differential TFM-C/celecoxib-related effects on TNF-α production may extend to other cytokines as well, and provide an important clue as to the more potent beneficial effects of TFM-C compared to celecoxib in the arthritis models presented here. The suppression of antibody-induced arthritis, which requires innate but not acquired immune cells [29][30][31][32][33][34]39], suggests that TFM-C also inhibits the activation of innate immune cells while celecoxib does not. In fact, TFM-C suppresses the production of inflammatory cytokines from macrophages and the activation of mast cells as well as the subsequent recruitment of leukocytes. Mast cells are essential for the initiation of antibody-induced arthritis [29]. 
Moreover, mast cells are present in human synovia [40][41][42][43] and are an important source of both proteases and inflammatory cytokines, including IL-17, in patients with rheumatoid arthritis [42][43][44]. The clear difference between the effects of TFM-C and celecoxib on the suppression of mast cell activation could explain the differential impact of these compounds on arthritis models. Mast cells are important not only in arthritis but also in other conditions, such as allergy, obesity and diabetes [45]. Therefore, the suppression of mast cell activation by TFM-C may be applicable for the inhibition of these diseases in addition to autoimmune diseases. Cytokines and chemokines, such as TNF-α and MCP-1, produced by macrophages, are suggested to play important roles for neutrophil influx in thioglycollateinduced peritonitis [46]. Mast cells were shown to produce TNF-α, which recruits neutrophils into the peritoneum in an immune complex peritonitis model [47]. Thus, it is likely that TFM-C suppressed macrophages and mast cells produce such chemoattractants, which in turn inhibited neutrophil influx into the peritoneum. However, it is also possible that TFM-C directly suppressed neutrophil activation. Further studies are required to address this possibility. As described above, the major players in CAIA are innate immune cells, while adaptive immune cells are not required for disease development. Therefore, CAIA has value as an animal model for the study of the effector phase of arthritis. However, it is well known that adaptive immune cells play a significant role in the pathogenesis of RA and the strongest genetic link in RA is the association with HLA-DR, which is thought to present autoantigens to T cells. The activation of T cells and B cells is believed to initiate and/or enhance the effector inflammation phase of arthritis. In fact, massive infiltration of T and B cells is observed in RA synovium. Therefore, the ideal therapeutic agents for RA are those displaying the capacity to suppress both the induction and effector phases of arthritis. TFM-C treatment suppresses CIA, which requires both innate and adaptive immune cells for the development of arthritis. We previously demonstrated that celecoxib treatment suppresses EAE induced by immunizing B6 mice with myelin oligodendrocyte glycoprotein (MOG) peptide [22]. The suppression of EAE by celecoxib was COX-2 independent and was accompanied by reduced IFN-γ production by MOGreactive T cells. We observed a trend of reduced anti-CII antibody levels in serum upon TFM-C treatment. As TFM-C inhibited secretion of both recombinant IL-12 and IL-23 using a pIND ponasterone-inducible vector system in HEK293 cells [23,24], TFM-C treatment may have also influenced CII-specific immune responses by suppressing antigen-presenting cells. Specific inhibition of COX-2 has some adverse effects. Rofecoxib, a highly specific COX-2 inhibitor, was withdrawn from the world market because of an increased rate of cardiovascular events in patients with colorectal polyps [48]. Celecoxib was also shown to augment cardiovascular and thrombotic risk in colorectal adenoma patients, especially in the subgroup suffering from preexisting atherosclerotic heart disease [49]. Moreover, inhibition of COX-2 activity has been reported to exacerbate brain inflammation by increasing glial cell activation [50]. It has been suggested that the inhibition of COX-2-dependent prostaglandin I 2 from endothelial cells may be the major cause of thrombosis [51]. 
As the COX-2-inhibitory activity of TFM-C is 205-fold lower than that of celecoxib, the arthritis suppression by TFM-C appears to be independent of COX-2 inhibition. Therefore, TFM-C, which has strong immunoregulatory abilities but low COX-2-inhibitory activity, could serve as a new disease-modifying agent to prevent the progression of autoimmune diseases such as RA. Conclusions In summary, TFM-C, a trifluoromethyl analogue of celecoxib, inhibits arthritis despite the fact that TFM-C possesses very low COX-2-inhibitory activity. The most striking features of TFM-C are its inhibitory effect on the activation of innate immune cells and its suppression of arthritis compared to celecoxib. TFM-C treatment suppressed both CIA and CAIA by targeting innate immune cells, which are involved in both the induction and the effector phases of arthritis inflammation. Taking these data together, TFM-C may serve as an effective therapeutic drug for arthritis, including RA.
5,877.2
2012-01-17T00:00:00.000
[ "Medicine", "Biology" ]
Features of $\omega$ photoproduction off proton target at backward angles: Role of nucleon Reggeon in $u$-channel with parton contributions Backward photoproduction of $\omega$ off a proton target is investigated in a Reggeized model where the $u$-channel nucleon Reggeon is constructed from the nucleon Born terms in a gauge invariant way. The $t$-channel meson exchanges are considered as a background. While the $N_\alpha$ trajectory of the nucleon Reggeon reproduces the overall shape of NINA data measured at the Daresbury Laboratory in the range of $u$-channel momentum transfer squared $-1.7<u<0.02$ GeV$^2$ and energies at $E_\gamma=2.8$, 3.5 and 4.7 GeV, a possibility of parton contributions is searched for through the nucleon isoscalar form factor which is parameterized in terms of parton distributions at the $\omega NN$ vertex. Detailed analysis is presented for NINA data to understand the reaction mechanism that could fill up the deep dip from the nucleon Reggeon at momentum squared $u=-0.15$ GeV$^2$. The angle dependence of the differential cross sections at the NINA energies above is reproduced in the overall range of $u$, including, in addition, the CLAS data at forward angles. The energy dependence of the differential cross section is investigated based on the NINA data and the recent LEPS data. A feature of the present approach that implicates parton contributions via the nucleon form factor is illustrated in the total and differential cross sections provided by the GRAAL and CB-ELSA Collaborations. I. INTRODUCTION Understanding the structure of hadrons based on QCD is a longstanding issue, and hadron reactions at wide angles are useful means to investigate parton contributions to reaction processes. In the search for parton distributions in hadrons, recent experimental activities and theoretical developments have concentrated on identifying the scaling of cross sections for meson photo/electroproduction at mid angle [1][2][3][4][5] as well as the hadron form factors in terms of parton densities at large virtual photon momentum squared [6][7][8]. In photoproduction of the lighter vector mesons ρ, ω, and φ, peripheral scattering via meson and Pomeron exchanges is suppressed at large angles, but an enhancement of quark exchange is expected due to the smaller impact parameter ~ 1/√(−t). Scaling of differential cross sections for hadron reactions with respect to energy is one of the examples that perturbative QCD predicts at mid angle θ ≈ 90° [1]. Possibilities of such hard processes were examined in the CLAS experiment, where the differential cross sections of ω photoproduction were measured over the resonance region 2.6 < W < 2.9 GeV up to a momentum transfer squared −t = 5 GeV^2 [9]. Around the mid angle θ ≈ 90°, the scaling of cross sections by s^8 in the photon energy range Eγ = 3.38-3.56 GeV seems to be consistent with quark and gluon exchanges predicted by the QCD-inspired model [10] as well as the Reggeized meson exchanges in the t-channel with the trajectory saturated at large −t [11][12][13][14]. At very backward angles θ ≈ 180° beyond resonances, however, theory and experiment in this kinematical region are rare [9,10,15-18], and only the data from the NINA electron synchrotron at the Daresbury Laboratory [17] are available at present for the u-channel momentum squared −1.8 < u < 0.02 GeV^2 at Eγ = 2.8-4.7 GeV.
Because of the isoscalar nature of ω meson it is expected that the reaction at backward angles is dominated by the u-channel nucleon exchange with a dip at u = −0.15 GeV 2 arising from the nonsense wrong signature zero (NWSZ) of the N α trajectory. Interestingly enough, however, the measured cross section exhibited the dip much weaker than predicted by the Regge theory, and hence, a sort of a mechanism is needed to fill up the depth of the dip. While there are no other baryon trajectories to play such a role, the authors of Ref. [17] suggested a possibility of parton contributions there by showing the s 8 scaling of cross section over the dip in the analysis of data at E γ = 3.5 GeV. Recently this issue was reexamined in Ref. [15] to investigate backward ω photoproduction in the baryon pole model with hadron form factors considered. However, the discussion on the reaction mechanism around the expected dip as well as its appearance in the data was no longer valid in the model, because the occurrence of a dip is a unique feature of the baryon trajectory at the NWSZ in the Regge theory. Thus, the production mechanism of ω photoproduction at very backward angles remains not fully understood yet, and the topic raised by the NINA data should be revisited within the Regge framework for the u-channel nucleon exchange. In this work we study backward ω photoproduction with our interest in the search of hard process [10] involved in the NINA data [17]. Meanwhile, as the scaling of cross section for meson photoproduction is mostly due to quark and gluon dynamics through hard process in the midst of meson and baryon degrees of freedom [2,3], it becomes, therefore, an important issue how to consider parton contributions in the hadronic amplitude of the Regge theory for the present process. With these in mind, our purpose here is to find a way of considering parton distributions within the Regge framework for an understanding of parton contributions to the isotropic cross sections of NINA data observed in the very backward region. This paper is organized as follows; Section II devotes to a construction of photoproduction amplitude where the nucleon exchange is reggeized with the background contribution from the meson exchanges. Discussion is given on how to consider parton contributions in the hadronic production amplitude. In Sec. III numerical consequences in the differential cross sections are presented to compare with existing data. More proofs for the validity of the present approach to current issue are given in the differential and total cross sections in the region where the nucleon Reggeon plays a role. Summary and discussions follow in Sec. IV. II. BARYON REGGEON MODEL At backward angles where the u-channel momentum squared |u| is small hadron reactions are well described by the u-channel baryon Reggeon in the resonance region [19]. In this section we discuss a construction of photoproduction amplitude for the nucleon Reggeon because the isoscalar ω prohibits baryon resonances of isospin I = 3/2 from the γp → ωp reaction. In the reggeization of the relativistic Born terms the nucleon exchange in the u-channel alone is not gauge invariant due to the charge coupling term and the nucleon exchange in the s-channel is introduced further to preserve gauge invariance of the production amplitude. At higher energies beyond the resonance region, the meson exchange in the t-channel begins to give a contribution. 
To reproduce experimental data in the overall range of energy and angle, therefore, it is necessary to include the meson exchange as a background contribution. However, to avoid the possible double counting that would arise from s-t channel duality if the t-channel were also reggeized (in addition to the s-u duality already carried by the u-channel Reggeon), we consider the t-channel meson exchanges in the pole model with cutoff functions that tame the divergence of the cross sections at high energies. A. Nucleon exchange reggeized in the u-channel For gauge invariance of the nucleon exchange, as shown in Fig. 1, we now write the nucleon Born terms with the electromagnetic and strong coupling vertices, where u(p) and ū(p′) are Dirac spinors of the initial and final nucleons with momenta p and p′, and ε_μ and η*_ν are the polarization vectors of the incoming photon and outgoing ω with momenta k and q. The charge and anomalous magnetic moment are e_N = 1, κ_N = 1.79 for the proton and e_N = 0, κ_N = −1.91 for the neutron. We take g_ωNN = 15.6 from the universality of the ω meson decay constant f_ω, and κ_ω = 0 for consistency with results from other hadronic processes, e.g. γN → π0N [11]. The reggeization of the u-channel amplitude is simply done by replacing the u-channel pole 1/(u − M^2) with the nucleon Regge propagator R_N(s, u), and the reggeized amplitude is expressed in terms of R_N(s, u), where the signature is defined as τ = (−1)^{J−1/2}, with τ = +1 for the nucleon, and s_0 = 1 GeV^2. Given the baryon trajectory of this form for spin J, the MacDowell symmetry predicts the existence of a state with the same signature τ but opposite parity. Thus, denoting α^+(√u) = α^−(−√u) to distinguish states of the same spin but opposite parity, the nonexistence of the parity-negative state corresponding to the nucleon in the Chew-Frautschi plot dictates 1 + e^{−iπ(α_N(√u)−0.5)} = 0, so that this state does not contribute to the reaction process; this leads to the occurrence of a dip associated with the NWSZ at some position in u. In accordance with many applications we take the slope α′ = 0.9 GeV^-2. The value of the intercept α_0 in the literature varies around −0.3. Here, we choose α_0 = −0.365 so that the trajectory in Eq. (7) yields the dip at u = −0.15 GeV^2 measured in the NINA data at Eγ = 2.8, 3.5, and 4.7 GeV [17]. B. Meson poles in t-channel as a background Since the nucleon Reggeon in the u-channel in Eq. (5) gives a contribution that is, in general, an order of magnitude smaller than that of the π exchange in the t-channel, the nucleon Reggeon contribution alone is not enough to reproduce the cross-section data in the overall range of angles. Therefore, we consider the contribution of meson exchanges as a background on which the nucleon Reggeon is based. As discussed above, we treat the meson exchange simply as a t-channel pole with cutoff functions and cutoff masses. For consistency with our previous work, we utilize the meson exchanges with coupling constants taken to be the same as in Ref. [14]. The meson exchanges in Ref. [14] are now given as t-channel poles with the pole propagator and a cutoff function of the type given in Eq. (9), for ϕ = π, σ, and f_1. As for the f_2 exchange, however, due to its highly divergent behavior despite the large cutoff mass, we regard it as a t-channel Reggeon together with the Pomeron exchange, as in Ref. [14]. To fix the cutoff mass Λ_ϕ in Eq.
(9) we exploit the natural and unnatural parity cross sections over the resonance region. Before doing this, however, it should be cautioned that the determination of the sign of the π exchange relative to nucleon is of importance, because these two are the leading contributions to the reaction at forward and backward angles, respectively. Given the nucleon Reggeon M N in Eq. (5) the production amplitude for the exchanges of the natural and unnatural parity mesons consists of the following two terms, respectively. In Fig. 2 (a) by adjusting the cutoff Λ σ = 0.65 GeV we find that the coupling constant g ωN N = +15.6 gives a good fit to the natural parity cross section. By using Λ f1 = 1.3 GeV for f 1 and Λ π = 0.72 GeV with n = 2 to suppress the large coupling at γπω vertex a fair agreement is obtained as shown in Fig. 2 (b). For further confirmation of the coupling constants and cutoff masses chosen above we will provide differential and total cross sections at low energies in the following section for numerical consequences. We summarize the coupling constants and cutoff masses in Table I. 3) and (4). In (b) Solid curve is from π exchange with f1 of the 10 −5 order contribution. Data are taken from Ref. [20]. C. Parton contribution and scaling Since our interest is in the analysis of the reaction mechanism which is suggestive of the hard process at very small −u, we proceed to consider parton contributions in the hadronic amplitude in Eqs. (10) and (11). The differential cross section for the u-channel momentum transfer squared is defined as The NINA data from the reaction γp → ωp at E γ = 3.5 GeV are analyzed by a parameterization of dσ/du which is divided by hadronic and hard scattering parts, respectively [17], i.e., Then, a fit of data with the s 8 scaling assumed in the hard scattering term produces the B(u) isotropic at the relative angle φ ≃ 90 • for all u. This suggests incoherency between hadronic process and hard scattering [17]. More analysis [17] leads us to interpret the hadronic term as the Reggeon with energy dependence s α−1 , in which case the trajectory α(u) generates a typical dip at the expected position. The B(u) term in the hard scattering is regarded as parton contributions with s 8 scaling with respect to the momentum squared u. In order to include the parton contributions in the hadronic amplitude it is natural to suppose a possibility of parton distribution in nucleon form factors at very backward angles [7] in addition to the point coupling of ωN N vertex in Eq. (4), as depicted in Fig. 2. For this we introduce the isoscalar form factor F (s) (u) of the nucleon by the similarity of the ωN N vertex to a virtual photon coupling, γ * N N . Therefore, the ωN N vertex in the nucleon Reggeon in Eq. (5) is extended to include the additional coupling, i.e., with a relative angle e iφ(u) between the two coupling phases. As a result, the nucleon Reggeon is extended to include parton contributions via the nucleon isoscalar form factor, R N → R N + e iφ(u) F (s) (u) R N , and we write the full amplitude as in accordance with Eq. (13) with the two terms in the first bracket referring to hadronic and the last to parton contributions, respectively. The term M b.g. represents the background coming from all the meson exchanges in the t-channel, i.e., all terms excluding the nucleon term M N in Eqs. (10) and (11). 
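To make the numbers quoted above concrete, the following minimal sketch (not part of the original analysis) evaluates the linear nucleon trajectory and locates the nonsense wrong-signature zero, i.e. the value of u at which the signature factor 1 + τ e^{−iπ(α(u)−1/2)} vanishes for τ = +1. The overall propagator normalization is simplified here, and the second intercept, −0.56, is the modified trajectory value quoted later in the text.

```python
import numpy as np

# Linear baryon trajectory alpha(u) = alpha0 + alpha' * u (alpha' in GeV^-2).
# For the nucleon (signature tau = +1) the nonsense wrong-signature zero (NWSZ)
# occurs where 1 + tau*exp(-i*pi*(alpha - 1/2)) = 0, i.e. at alpha(u) = -1/2.

def trajectory(u, alpha0, slope=0.9):
    return alpha0 + slope * u

def signature_factor(alpha, tau=1.0):
    # Simplified signature factor of the Regge propagator (overall normalization dropped).
    return 0.5 * (1.0 + tau * np.exp(-1j * np.pi * (alpha - 0.5)))

def nwsz_position(alpha0, slope=0.9):
    # u at which alpha(u) = -1/2, where the signature factor (and hence the amplitude) vanishes.
    return (-0.5 - alpha0) / slope

if __name__ == "__main__":
    # Intercept -0.365 chosen in the text: dip at u ~ -0.15 GeV^2.
    print("alpha_N dip at u = %+.3f GeV^2" % nwsz_position(-0.365))
    # Modified intercept -0.56 (quoted later in the text): dip pushed to u ~ +0.067 GeV^2,
    # i.e. outside the physical region u < 0.02 GeV^2, so no dip appears in the data range.
    print("modified trajectory dip at u = %+.3f GeV^2" % nwsz_position(-0.56))
    # Magnitude of the signature factor near the NWSZ of alpha_N:
    for u in (-0.25, -0.15, -0.05):
        print("u = %+.2f: |signature| = %.3f" % (u, abs(signature_factor(trajectory(u, -0.365)))))
```

The same kind of evaluation can be applied to the relative hadron-parton phase φ(u) introduced in the next paragraph.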
In the additional Reggeon M_N which carries the parton contributions via the form factor, we further assume that the nucleon trajectory α_N(u) is modified in the presence of parton dynamics at very small |u|. The relative angle e^{iφ(u)} between the hadronic and partonic phases is parameterized as a linear function of u, φ(u) = (a u + b) π/180, (16) with a in units of GeV^-2. The nucleon isoscalar form factor is composed of the proton and neutron charge form factors [21], and in the parton model the quark contents of these are expressed in terms of the quark distributions [7] for the valence quarks u and d. Here, we choose an ansatz for the momentum-fraction x dependence of the partons which simulates the Regge trajectory with slope α′ in units of GeV^-2, because of its relevance to the present formalism. The momentum squared u_0 is the maximum value of u, introduced to avoid a rapid divergence in the region u > 0, and α′ is considered a parameter chosen for the calculation. As to the valence quark distributions q_v(x), we employ the unpolarized parton distributions for the u and d quarks at the input scale μ^2 = 1 GeV^2. In the fitting procedure for the relative angle φ(u) and the trajectory α_N(u) to the NINA data, we adjust the slope parameter to obtain α′ = 0.3 for a better result. The sensitivity of the proton isoscalar form factor F^(s)(u) to the slope parameter α′ is examined to discuss its implications for physical processes. In Fig. 3 the form factor with α′ = 0.3 chosen here is compared to the case with α′ = 1.105, which is obtained from the fit of the proton Dirac form factor F_1^p to empirical data [22]. (We mean that the dependence of the form factor upon the momentum squared u is taken to be the same as the dependence upon the virtual photon momentum squared Q^2.) Giving a contribution stronger than the case with 1.105, as shown, the choice of α′ = 0.3 deviates largely from the empirical on-mass-shell form factor with α′ = 1.105. In an application to hadron reactions such as ω photoproduction, however, the slope α′ = 0.3 is favored over the case of 1.105 in order to agree with the differential data at u = −0.15 GeV^2. This is similar to the proton Dirac form factor F_1^p(Q^2) in the electroproduction γ*p → π+n [22], in which case the dipole fit of the form factor with the cutoff mass Λ = 1.55 GeV agrees with the electroproduction data better than the form factor with α′ = 1.105. Our choice of α′ = 0.3 yields a form factor which lies between the dipole form factor above and the one prescribed by Kaskulov and Mosel [23]. We regard these results as feasible, because the form factor in hadron reactions is, in general, half off-shell and need not be the same as the on-shell one. III. NUMERICAL RESULTS In the fitting procedure to the NINA data, our practical purpose is to fill up the deep dip, shown by the dotted curve, that results from the nucleon Reggeon M_N in Fig. 4 (a), and, in addition, to simulate the scaling behavior of the cross section with respect to the momentum squared u in (b) with parton contributions from the additional nucleon Reggeon. However, there is no evidence for a dip in the cross section of (b), for which the additional Reggeon M_N is applied, although it should produce a dip at the same position in u by the NWSZ of the trajectory α_N(u), unless its trajectory is different from α_N(u).
Thus, expecting not only the scaling without a dip in the hard scattering but also the reaction mechanism to fill up the depth of the original dip by a destructive inter-ference between hadronic and hard processes, we have to move the position of the dip by M N to another place by altering the trajectory α N in Eq. (14). Hence, we let the trajectory of the R N vary in the fitting procedure to obtain α N (u) = 0.9 u − 0.56 (22) with the intercept adjusted for the best fit to the NINA data in the overall range of energy and angle. The dip position of R N is now at u = +0.067 GeV 2 , and hence, not appearing in the kinematical region of the reaction u < 0.02 GeV 2 so that the resulting cross section in (b) could simulate the scaling without a dip, as shown by the solid curve. Given the differential cross section dσ/du for γp → ωp at E γ = 3.5 GeV separately into two parts as in Eq. (13), the hadronic contribution corresponding to the s α(u)−1 term and the s −8 scaling by parton contributions are shown in Fig. 4 (a) and (b), respectively. In (a) the solid curve results from the sum of the dotted curve by the nucleon Reggeon and the dashed one by the t-channel meson exchanges, i.e., from the hadronic amplitude M N +M b.g. in Eq. (15). As expected, the Reggeon M N produces the deep dip at u = −0.15 GeV 2 some part of which could be covered over with the meson contribution. Nevertheless, the dip remains not fully compensated yet. Moreover, the solid curve of the hadronic contributions M N + M b.g. is overestimating the data over 0.3 GeV 2 < |u|, which should be reduced. In (b) the solid curve is from the additional Reggeon, M N with the trajectory α N (u) = 0.9 u−0.56 with the dip at u = +0.067 GeV 2 so that the M N could preserve the cross section data without dip near u ≈ 0. On the other hand, this term, when combined with the hadronic part M N + M b.g. via the relative angle φ ≃ 90 • , should fill up the rest of the part of the dip as well as it should reduce the overestimating contribution of the solid curve to hadronic data in (a). Figure 5 shows the differential cross section dσ/du for γp → ωp when the cross sections (a) and (b) of Fig. 4 are combined with each other. The parameters a = −65 and b = 93 are chosen for the relative angle φ(u) between hadronic and parton phase of the reaction process in Eq. (13). In actual, these parameters correspond to φ = 0.57π at u = −0.15 GeV 2 which is close to φ ≃ 90 • from the decomposition of the NINA data in Fig. 3. The additional Reggeon M N with parton contributions vanishes over |u| > 2 GeV 2 . The respective roles of the Reggeon M N and the meson exchanges M b.g. in the tchannel are depicted by the dash-dotted and dash-dashdotted curves in order. The rapid increase of cross sections in the NINA data at large |u| are identified by the t-channel meson exchanges, while the Reggeon M N gives the contribution in the backward region |u| < 2.5 GeV 2 , as expected. We further note that these hadronic contributions, M N + M b.g. reproduce the shape convex up in the interval −3.5 < u < −2.5 GeV 2 which was once described by the two quarks exchange in previous work GeV. The dashed curve corresponds to the solid one in Fig. 4 (a). The solid curve from the full amplitude shows a good agreement with data in the overall range of u with the fit of a = −65, b = 93 for φ(u). Data are collected from Refs. [9] and [17]. Data over Eγ ≈ 3 GeV are taken from Ref. [17] and at low energies are taken from Ref. [18]. 
section in the lower panel with a = −35 and b = 120 for φ, respectively. These parameters at u = −0.15 GeV^2, for instance, yield φ ≈ 0.47π and φ ≈ 0.7π, in order. Thus, the relative angle φ is not unique with respect to the energy dependence of the cross sections, and we have to change the parameters a and b to achieve such an agreement with data at the given photon energies Eγ = 2.8, 3.5, and 4.7 GeV. The contribution of meson exchanges, as shown by the dash-dash-dotted curve, is responsible for the forward enhancement of the cross section at large |u|, as before. The solid curve shows the improvement of the hadronic contribution (dashed curve) by the additional Reggeon M_N with parton contributions at backward angles. To test the validity of the present approach further, we check the energy dependence of the differential cross sections in the range −1.8 < u < 0.02 GeV^2 and present the result in Fig. 7. The NINA cross sections dσ/du in eight angle bins are reproduced up to the photon energy Eγ = 5 GeV. In the differential cross sections at −0.1 < u < 0 and u = −0.15 GeV^2, the data recently measured at the SPring-8/LEPS facility [18] are included for relevance to the present analysis. Since the angle φ has the energy dependence shown in Figs. 5 and 6, we parameterize a = 31 GeV^-1 Eγ − 178. (Data below 2 GeV are taken from Ref. [24] and data at 9.3 GeV from Ref. [20].) The parton contribution is maximal around the dip position at u = −0.15 GeV^2 and vanishes over u ≈ −1.7 GeV^2, as expected. With the expectation that the exponent x of the A s^−x term is converted to an effective trajectory α(u), as indicated in Ref. [17], the role of the new trajectory α_N(u) with parton contributions is significant at u = −0.15 GeV^2, although we do not follow the negative power of x assumed there. The analysis of Fig. 7 is consistent with the observation of Ref. [17] that the slope of the cross section at u = −0.15 GeV^2 is different from the others. We obtain a reasonable result for cross sections that cover the overall range of energy and momentum of the NINA data. These findings further support the validity of the present approach. Predictions for the differential and total cross sections are presented in Figs. 8 and 9 with the parameters a and b of Fig. 7 for the angle φ. The differential cross sections are reproduced in the overall range of −t at the given energy Eγ. In Fig. 8 each cross section at very backward angles reveals the role of the nucleon isoscalar form factor with partons in bringing the model prediction closer to the experimental data, even in the low energy region below Eγ = 2 GeV. Within the present framework it is rather natural that parton contributions via the form factor could appear at lower energies where the nucleon Reggeon is dominant. Such evidence of parton distributions at low energies can be traced in the total cross section, presented up to Eγ = 6 GeV in Fig. 9. The nucleon isoscalar form factor with parton distributions plays the role of reducing the cross section. Thus, it is a feature of the present approach that the parton distribution could play a role through hadron form factors without deep scattering by a virtual photon at high energies. Together with the nucleon Reggeon, a fair agreement with data on total and differential cross sections further confirms the validity of the background contribution with the cutoff masses chosen for the present calculation. IV.
SUMMARY AND DISCUSSIONS Backward photoproduction of ω meson off a proton target is investigated within the Regge framework where the nucleon Born terms in the s-and u-channels are reggeized for the gauge invariant u-channel nucleon Reggeon. The exchanges σ +π +f 1 +f 2 +P omeron in the t-channel are included as a background contribution to reproduce reaction cross sections. The cutoff masses for the cutoff functions for σ + π + f 1 poles in the t-channel are determined from natural and unnatural parity cross sections as shown in Fig. 2. While the N α trajectory of the nucleon Reggeon reproduces the overall shape of the NINA data measured at the Daresbury Laboratory in the range of −1.7 < u < 0.02 GeV 2 and energies at E γ = 2.8, 3.5 and 4.7 GeV, a possibility of parton contributions is searched for by considering the nucleon isoscalar form factor at the ωN N vertex which is parameterized in terms of parton distributions. The nucleon Reggeon would make a deep dip at u = −0.15 GeV 2 , which should be covered over by a fillup mechanism in order to agree with the cross sections observed in the Daresbury experiment. The dip of the nucleon Reggeon is covered over partly with the meson exchanges and partly with the additional Reggeon with partons at very small |u|. From the practical point of view a manipulation of the relative phase φ ≃ 90 • and the need of the t-channel contributions are of importance to reproduce the NINA data. Due to the parton densities in the nucleon isoscalar form factor, the parton contributions through the nucleon Reggeon activates rather in the lower energy region than is usually expected, as shown by the description of total and differential cross sections of GRAAL and CB-ELSA Collaborations. Confronting the impending 12 GeV upgrade of CLAS detector, it is timely interesting to observe in experiments how partons would manifest themselves in the midst of hadronic degrees of freedom and how we could understand such a phenomenology through theoretical analysis. In this work we have illustrated how to incorporate parton distributions with hadronic degrees of freedom in hadron models just as the Regge model and the prescription we favored here offers an intuitive way to consider parton contributions in hadron reactions via the hadron form factors. Together with the scaling of meson photoproduction by the factor s 7 around mid angle θ = 90 • observed in the Jefferson Lab, the NINA data at Daresbury, though almost a 30 years-old issue, provide information further for our understanding of quark dynamics inside hadrons in the limit −u ≈ 0 that we could expect to observe in future experiments at the 12 GeV upgraded CLAS as well as those facilities LEPS, CB-ELSA, and GRAAL.
6,769
2018-10-27T00:00:00.000
[ "Physics" ]
Effects of different nanoclay loadings on the physical and mechanical properties of Melia composita particle board

Photo 1. Materials used for the experiment. Melia composita: a. Tree; b. Flower and fruit; c. Bark; d. Leaf and flower. e. Nanoclay: Cloisite Na. Photo (a, b, c, d) Rksrathore, Vijay, P. Grard. Melia composita Benth. [online] India Biodiversity Portal, Species Page: {name of species field}, Available at: http://indiabiodiversity.org/species/show/261673 [Accessed Sep 11, 2017]. Photo (e) N. Ismita.

N. Ismita 1, Chavan Lokesh 1

This study investigated the effects of adding a filler of nano-sized particles of Cloisite Na+ (nanoclay) to urea-formaldehyde resin on the physical and mechanical properties of particle boards made with this resin. Cloisite Na+ was introduced at rates of 2%, 4% and 6% of the dry mass of the resin. Density, water absorption (WA), thickness swelling (TS), modulus of rupture (MOR), modulus of elasticity (MOE) and internal bond strength (IB) were measured to evaluate the performance of the boards. Significant improvements were observed for TS, MOR and MOE when Cloisite Na+ was added to the resin. More specifically, in samples bonded with UF resin and 6% nanoclay, increases of 34% and 65% were observed in MOR and MOE respectively compared to the control boards.

Introduction

Additives are of high significance for the wood composite industry. These materials are often used to alter the performance of adhesives in order to improve the end product. Adhesion promoters, fillers and tackifiers are common types of additives used in resins. Along with these traditional additives, nano-additives are also gaining attention in the field of wood composites. These nano-sized particles with large surface area have been observed to alter the properties of adhesives as well as of composites. Nanofillers can significantly improve or adjust different properties of the materials into which they are incorporated, such as optical, electrical, mechanical, thermal or fire-retardant properties, sometimes in synergy with conventional fillers (Marquis et al., 2011). At present, much attention is being paid to nanocomposite materials comprising layered silicate clay. Nano-plate fillers can be natural or synthetic clays, as well as phosphates of transition metals. Clay-based nanocomposites generate an overall improvement in physical performance. The most widely used clays are the phyllosilicates (smectites). They have a layered crystalline structure of nanometric thickness. Clays are classified according to their crystalline structures and also according to the quantity and position of the ions within the elementary mesh (Marquis et al., 2011). The most important characteristics pertinent to the application of clay minerals in polymer nanocomposites are their rich intercalation chemistry, high strength and stiffness, high aspect ratio of individual platelets, abundance in nature, low cost and high gas-barrier quality (Salari et al., 2012). It has been found that a slight percentage of nanoclay can improve the curing performance of UF resin and the physical and mechanical properties of wood- and bamboo-based composites (Lei et al., 2008; Andrabi and Ismita, 2013). Nanoclay fillers have a modifying effect on UF resin by decreasing resin viscosity, improving bonding and controlling its penetration into the wood tissues (Doosthoseini and Zarea-Hosseinabadi, 2010). Ashori and Nourbakhsh (2009) observed considerable improvement in the mechanical and physical properties of boards when nanoclay was loaded at 2 to 6 weight % into
the board furnish. Addition of Cloisite 30B resulted in significant improvement in the mechanical properties of wood polymer composites (Samariha et al., 2015). However, there is a report that the compactness of strips has a greater effect on oriented strand lumber when prepared with different concentrations of UF resin along with nano-silane (Taghiyari et al., 2016a). One way to decrease the formaldehyde emission of UF-bonded panels is to use nano-fillers such as nanoclay and nano-SiO2, because these materials have strong absorbability as well as a high barrier property (Roumeli et al., 2012). Taghiyari et al. (2013a) observed that wollastonite nano-fibers contribute to bond formation between wood chips when applied; consequently, they can improve the physical and mechanical properties of particle board. Wollastonite has a high thermal conductivity coefficient that can accelerate the transfer of heat from hot-press plates and facilitate resin curing in the center of the mat (Taghiyari et al., 2013b, 2014). Against this background, the present study investigates the effect of three concentrations of Cloisite Na+ (nanoclay) on the physical and mechanical properties of particle board made from Melia composita using urea-formaldehyde resin.

Materials and methods

A Melia composita tree was felled on the campus of the Forest Research Institute, Dehradun. Small sections of it were manually converted into chips with a sickle. Particles were prepared by passing these chips through a Condux mill. Preferred dimensions of cutter-type particles are usually 0.2 to 0.4 mm in thickness, 10 to 60 mm in length and 3 to 30 mm in width. The moisture content of the material leaving the drier ranged from 6 to 12 percent. The particles were then air dried to about 6% moisture content. The particles were sieved through 60 and 40 micron mesh screens of a horizontally vibrating screen machine to remove oversized particles and to obtain particles of uniform size. Urea-formaldehyde (UF) resin in dry powder form was procured from ARCL Organics Pvt. Ltd. The Cloisite Na+ used in this study was procured from Connell Bros. Company (India) Pvt. Ltd., Mumbai. The specifications of the Cloisite Na+ are listed in table I. UF resin was diluted to 40% solid content by mixing the resin powder with water. Nanoclay was added to the urea-formaldehyde resin in amounts of 2%, 4% and 6% of the dry mass of resin. It was mixed with the resin using a mechanical blender at a speed of 4,500 rpm for 10 minutes. Ammonium chloride (NH4Cl) was then added as a hardener at a level of 2% based on the weight of resin and mixed well for 5 minutes. In preparing the particle boards, dried particles were blended with UF resin in a rotating drum-type mixer fitted with a pneumatic spray gun. Resin weighing 10% of the dry weight of the M. composita wood particles was used for spraying. Boards for each nanoclay loading, as well as control boards for reference, were prepared. All boards were compressed at 21.0 kg/cm2 at 150 ºC for 15 minutes. Particle boards were prepared in 21 x 21 inch dimensions using a laboratory press and were conditioned for 2-3 days at room temperature before being cut into samples. Six samples from the control boards and nine samples for each test from the nanoclay-loaded boards were cut and tested according to Indian Standard 2380 (BIS, 1977).
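To make the adhesive recipe explicit, the sketch below computes the component masses for a hypothetical batch of dried particles, using the proportions stated above (resin at 10% of dry particle mass, nanoclay at 2%, 4% or 6% of the dry resin mass, hardener at 2% of the resin mass, and dilution of the resin powder to 40% solid content). The 10 kg batch size is an arbitrary illustration, not a quantity from the study, and the interpretation of "weight of resin" as dry resin solids is an assumption.

## Hedged sketch of the adhesive mix described in Materials and methods.
## batch_particles_kg is a hypothetical value chosen only for illustration.
mix_recipe <- function(batch_particles_kg = 10, nanoclay_frac = c(0.02, 0.04, 0.06)) {
  resin_solids_kg <- 0.10 * batch_particles_kg            # resin = 10% of dry particle mass
  water_kg        <- resin_solids_kg * (1 - 0.40) / 0.40  # dilute resin powder to 40% solids
  data.frame(nanoclay_loading = nanoclay_frac,
             resin_solids_kg  = resin_solids_kg,
             water_kg         = water_kg,
             nanoclay_kg      = nanoclay_frac * resin_solids_kg,  # % of dry resin mass
             hardener_kg      = 0.02 * resin_solids_kg)           # NH4Cl at 2% of resin (assumed dry basis)
}
mix_recipe()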
Results and discussion

The physical and mechanical properties of the particle boards are given in table II. One-way ANOVA (analysis of variance) was performed to discern significant differences at the 95% level of confidence. Duncan's subsets were formed to examine individual comparisons wherever necessary.

Density of the boards

Average density values of the boards without and with nanoclay loadings varied from 0.799 g/cm3 to 0.8273 g/cm3. Statistical analysis of the density data revealed that the densities of boards with 2%, 4% and 6% nanoclay loading and of the reference boards do not differ significantly (table III). This points to the fact that the composite manufacturing procedure was performed precisely (Candan et al., 2015) and also that the addition of nanoclay has not altered the density of the boards. This observation may be useful if the addition of nanoparticles can contribute to improvements in other important properties without compromising the density.

Water absorption (WA) and thickness swelling (TS)

Water absorption (WA) after 2 hours and 24 hours did not show any significant variation, even compared with the controls (tables II and III). The values ranged from 38% to 46% for 24 hours, which is within the values permitted by Indian Standard 3087 (BIS, 1985). Hosseyni et al. (2014) also observed that adding different loading levels of Na+ MMT into UF and MDI resins had no significant effect on absorption properties.

Thickness swelling (TS) during general absorption (GA) decreased considerably after the addition of nanoclay into the resin. The mean values were statistically different for the reference board and the boards with nanoclay loadings (table III). Thickness swelling (GA) in boards with 2% loading showed a 62% decrease compared to the reference board. Two subsets were formed for general absorption by Duncan's grouping test, with the control values occupying a separate group and the lower values of the nanoclay-loaded boards occupying the other. Thickness swelling during surface absorption (SA) showed a similar trend to that of general absorption. The decrease in thickness swelling was approximately 53% compared to the reference board. The presence of nanoclay can obstruct the capillaries in wood, which can help to control the thickness swelling of boards prepared with nanoclay-added resins (Khanjanzadeh et al., 2012).

It is pertinent to note that a particular nanoclay can give different results with different resins as far as TS and WA are concerned. For instance, the WA and thickness swelling of a wood composite were significantly reduced by adding Nanomer I.44P to the resin; however, Nanomer PGV did not affect the water absorption and TS stability of the same wood composite (Mamatha et al., 2013). Thus, the addition of Cloisite improved the swelling properties even though the water absorption properties of the particle board did not improve with nanoclay loading.
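For readers who wish to reproduce this style of analysis, a minimal sketch of a one-way ANOVA followed by Duncan grouping is given below. The thickness-swelling values are invented placeholders, not the study's data, and the Duncan test shown uses the agricolae package rather than any particular software used by the authors.

## Hedged sketch: one-way ANOVA and Duncan grouping of the kind reported above.
## The ts values below are made-up placeholders, NOT the measured data.
set.seed(1)
dat <- data.frame(
  loading = factor(rep(c("control", "2%", "4%", "6%"), each = 6)),
  ts      = c(rnorm(6, 20, 2), rnorm(6, 8, 2), rnorm(6, 8, 2), rnorm(6, 7, 2))
)
fit <- aov(ts ~ loading, data = dat)   # one-way ANOVA at the 95% confidence level
summary(fit)

# install.packages("agricolae")        # provides Duncan's multiple range test
library(agricolae)
duncan.test(fit, "loading", console = TRUE)   # groups analogous to the Duncan subsets in table III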
Modulus of rupture (MOR)

Bending strength is one of the most important mechanical properties of wood composite panels, since it influences their structural performance in applications. The MOR values of the control and nanoclay-loaded boards are within the values prescribed in IS 3087 (BIS, 1985). MOR showed significant improvement with nanoclay loading in the present study (tables II and III). The maximum MOR (17.22 N/mm2) was obtained for particle boards with 6% nanoclay loading, an improvement of 34% compared to the reference board, whereas the 2% and 4% loadings showed MOR values similar to each other but higher than the controls. Taghiyari et al. (2016b) reported improvement in the MOR of medium density fiberboard due to the addition of nano-wollastonite (NW), another mineral material; the improvement was attributed to the strong adsorption of NW on the cellulose surface. Salari et al. (2012) also observed that MOR improved significantly with the incorporation of organo-modified montmorillonite (MMT) up to 5%. Hosseyni et al. (2014) likewise reported 33% to 39% improvement in the MOR of particle boards made with nanoclay-mixed UF and MDI resins. The presence of nanoclay as a filler in wood composites helps stresses to distribute throughout the material, contributing to the improvement in strength properties (Lei et al., 2008; Hosseyni et al., 2014).

Modulus of elasticity (MOE)

The MOE of the boards varied from 1201.74 N/mm2 to 1987.51 N/mm2, with the reference board giving the minimum and the boards with 6% nanoclay loading giving the maximum MOE (table II). The mean MOE values do not fall within the limits prescribed by IS 3087 (BIS, 1985), but table II reveals a systematic improvement in the MOE values, from about 16.6% for 2% loading up to 65% for 6% loading. ANOVA of the individual values showed that the MOE values are indeed significantly different. It is important to note that adding nanoclay does not have any deteriorating effect on MOE. Hosseyni et al. (2014) and Candan et al. (2015), in contrast, observed no significant differences in the MOE values of composites reinforced with different nanoclays at different loading levels.

Tensile strength perpendicular to the grain (IB)

Internal bond strength gives information on the resistance of the face layer to separation from the core. The IB values of the reference and nanoclay-loaded boards are within the values prescribed in IS 3087 (BIS, 1985). The maximum IB (1.988 N/mm2) was obtained for the particle board with 2% nanoclay loading, which represents an improvement of 18.03% compared to the reference board (table II). Internal bond strength appears to decrease at 6% nanoclay loading, but it is not lower than that of the reference board. Statistically, no significant difference was observed between the IB values of the reference boards and the nanoclay-loaded boards. Lei et al. (2008) observed that the highest improvement in IB occurred with 2% Na+ MMT loading, after which the improvement was rather small up to 8% Na+ MMT; they invoked percolation theory and the networking capability of the clay nano-platelets to explain the improved behavior of the UF resin. Hosseyni et al. (2014) reported that adding Na+ MMT accelerates adhesion and strengthens the transverse cohesion strength of UF. Since IB is one of the quality parameters for particle boards, it is safe to say that the addition of Cloisite Na+ is not detrimental to the boards.
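As a quick check on the reported figures, the sketch below recomputes the relative MOE improvement from the mean values quoted above (1201.74 N/mm2 for the reference boards and 1987.51 N/mm2 at 6% loading); it says nothing about statistical significance.

## Relative improvement computed from the MOE means quoted in the text.
pct_improvement <- function(treated, reference) 100 * (treated - reference) / reference
pct_improvement(1987.51, 1201.74)   # ~65%, matching the reported gain at 6% loading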
Conclusions

Melia composita particle boards prepared using urea-formaldehyde resin (40% solid content) mixed with different loading levels of Cloisite Na+ (2%, 4% and 6%) showed that, for most of the parameters, the boards meet the minimum requirements of IS 3087: 1985. Thickness swelling and modulus of rupture showed significant improvement with the addition of nanoclay. The boards with 6% nanoclay content showed the maximum values for modulus of rupture (MOR). Modulus of elasticity (MOE) and tensile strength perpendicular to the grain (IB) showed no effect due to nanoclay loading. Based on the findings of the study, it can be concluded that nanoclay loading has a positive and significant effect on the thickness swelling (TS) and MOR values of the boards without compromising the water absorption, elasticity or internal bond strength of the board.

Table I. Specifications of Cloisite Na+.

Table II. Physical and mechanical properties of particle boards at different concentrations of nanoclay. Different letters indicate different significance levels. TS (GA): thickness swelling during general absorption; TS (SA): thickness swelling during surface absorption; WA: water absorption; MOR: modulus of rupture; MOE: modulus of elasticity; IB: tensile strength perpendicular to the grain.

Table III. ANOVAs for physical and mechanical properties.
3,099.6
2017-01-01T00:00:00.000
[ "Materials Science" ]
Analysis of factors affecting the fuel economy of ICE vehicles when operating a hydrocarbon fuel activator

The article examines the influence of a hydrocarbon fuel activator on the fuel consumption of an internal combustion engine when the activator is installed in the fuel system of a car in operation. An analysis of previously performed work was carried out, and the installation of a hydrocarbon fuel activator was identified as a parameter influencing the fuel consumption of a vehicle. The indicators that must be taken into account in the rate of fuel consumption when the hydrocarbon fuel activator is installed have been determined.

Introduction

In modern conditions, hydrocarbon fuels and lubricants have become a strategic resource, which is why measures related to saving fuel during vehicle operation have become especially relevant. On the one hand, when designing cars, manufacturers need to ensure compliance with environmental requirements that become tougher each year. Current trends toward transferring all land vehicles to electric traction do not solve the problems that consumers face when operating electric vehicles: the lack of a developed infrastructure for electric refueling, climatic conditions, and the time spent on charging batteries. On the other hand, the constant increase in the cost of hydrocarbon fuel and in environmental standards leads to an increase not only in the operational cost of cars equipped with an internal combustion engine, but also in the cost of their design and production. The methods used included comparison, comparison of the obtained facts, abstraction, comparative analysis and generalization, applied in the study of the practical test results. The research methodology is based on test methods (programs) developed in accordance with generally accepted standards for similar tests in Russia at the prototyping stage. The purpose of the tests is to obtain accurate and reliable results from performance tests in the study of engine fuel economy.

Results

Current accounting for fuel efficiency varies considerably across countries. For example, in the OECD (Organization for Economic Cooperation and Development) member countries, the average fuel consumption in 2005 was set at 8 liters/100 km for new passenger cars. In the USA, the average fuel consumption of cars and light trucks is slightly higher than 9 liters/100 km. In non-OECD countries, there are no clear data on average fuel efficiency. Of course, these numbers are very relative, since they depend on many factors [1]. A number of international organizations, the Federation Internationale de l'Automobile (FIA), the International Energy Agency (IEA), the International Transport Forum (ITF) and the United Nations Environment Programme (UNEP), have launched the global initiative to improve vehicle fuel efficiency "50 × 50. Global Initiative to Reduce Fuel Consumption", which includes 11 provisions and is aimed at reducing the fuel consumption of cars. The goal is to reduce the fuel consumed worldwide by 2050 by at least 50% compared with the current level of fuel efficiency [2]. Over the past 25 years more than 20 major studies have examined the technological potential to improve the fuel economy of passenger cars and light trucks in the United States. The majority have used technology/cost analysis, a combination of analytical methods from the disciplines of economics and automotive engineering.
In this review we describe the key elements of this methodology, discuss critical issues responsible for the often widely divergent estimates produced by different studies, review the history of this methodology's use, and present results from six recent assessments. Whereas early studies tended to confine their scope to the potential of proven technology over a 10-year time period, more recent studies have focused on advanced technologies, raising questions about how best to include the likelihood of technological change. The review concludes with recommendations for further research [9].

In the Russian Federation, at the design stage, manufacturers calculate fuel efficiency on the basis of the "Methodological Recommendations for the Consumption of Fuels and Lubricants" of the Ministry of Transport of the Russian Federation. The total fuel consumption in liters, referred to the distance traveled in kilometers, depends on the operating mode of the vehicle and is calculated by the formula (see the numerical sketch below):

Qн = 0.01 · Hs · S · (1 + 0.01 · D),

where Qн is the normative fuel consumption, l; Hs is the basic rate of fuel consumption per 100 km, l/100 km; S is the vehicle mileage, km; and D is the correction factor (total relative increase or decrease) to the norm, %.

The fuel consumption rates published by the Ministry of Transport of the Russian Federation for 2019 are presented in Table 1 for the brands of cars participating in the experiment, together with the main technical characteristics of the engine and transmission and the type of fuel and lubricants. To obtain more accurate values, a series of experiments was carried out in Rostov-on-Don to study the characteristics of vehicles equipped with a fuel activator. Based on previous studies, it was found that fuel economy is obtained when the fuel activator is installed in the fuel system. The results are presented in Table 2. The tests were carried out on passenger cars of different brands, under different climatic conditions, with different settings of the hydrocarbon fuel activator and different brands of fuel, in traffic outside the city. The following conditions were unchanged: the route was 17.7 km, the average speed was 76 km/h, and the time to complete the route was 14-15 minutes. Based on the data presented in Table 1, it was found that vehicles equipped with the fuel activator provide an average fuel economy of 3.57-13.03%. There is a relation between engine capacity and fuel consumption: the device provides the best reduction in fuel consumption for small engines, as clearly shown in Figure 1 (Fig. 1. Engine capacity and fuel economy). It can also be argued that the efficiency of the activator depends not only on the displacement of the engine but also on its quality characteristics. To confirm the results of the experiment, a series of tests for emissions of harmful substances was also carried out on one of the test vehicles. Measurement numbers 1 to 5 correspond to intervals of 10 seconds; the test results are shown in Table 3. The measurements covered the following parameters: CO2 (%), CO (mg), SO2 (mg) and NO2 (mg), before and after installation of the activator. The first series of tests of the fuel activator consisted in measuring the content of harmful compounds in the exhaust gases of the tested car with a COMETA-M No. 30164 gas analyzer in two engine operating modes, 2000 rpm and 3000 rpm, at an ambient temperature of 26 °C.
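As referenced above, a minimal numerical sketch of the Ministry of Transport consumption norm is given here; the mileage and correction factor used in the example call are arbitrary illustrative values, not measurements from the experiments reported in this article.

## Normative fuel consumption, Qn = 0.01 * Hs * S * (1 + 0.01 * D)
##   Hs: basic consumption rate (l/100 km), S: mileage (km),
##   D:  total relative correction to the norm (%)
fuel_norm_l <- function(Hs, S, D = 0) 0.01 * Hs * S * (1 + 0.01 * D)

# Illustrative call (hypothetical values): litres for one 17.7 km test route
fuel_norm_l(Hs = 9.0, S = 17.7, D = 10)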
The second series of tests of the fuel activator consisted in measuring the content of harmful compounds in the exhaust gases of the tested car with a Drager X-am 5000 No. 8318704 gas analyzer in two engine operating modes, 2000 rpm and 3000 rpm, at an ambient temperature of 26 °C. Measurement numbers 1 to 5 correspond to intervals of 10 seconds; the test results are shown in Table 4. The third series of tests of the fuel activator consisted in measuring the content of harmful compounds in the exhaust gases of the tested car with the COMETA-M No. 30164 gas analyzer in two engine operating modes, 2000 rpm and 3000 rpm, at an ambient temperature of 23 °С. Measurement numbers 1 to 15 correspond to intervals of 10 seconds; the test results are shown in Table 5. A Suzuki Grand Vitara with a 2.7-liter engine and without a catalytic converter was used in all three series of tests; the same RON-92 gasoline purchased from the Gazprom filling station network was used as fuel. The analysis of the calculated average measurement values before and after the installation of the fuel activator, for the content of harmful compounds in the exhaust gases of the tested vehicle in the two test cycles of 2000 rpm and 3000 rpm, is presented in Table 6. In general, the results of the tests of the fuel activator for the content of harmful compounds in the exhaust gases allowed the following conclusions. In the operating mode of the internal combustion engine at 3000 rpm, the CO indicator decreases to zero, and this occurs faster when the fuel activator device is used. The SO2 and NO2 indicators decreased with the use of the fuel activator, but the CO2 values did not change. In the engine operating mode of 2000 rpm, a more significant decrease in the average measured values was also observed at the end of the test cycle with the fuel activator installed. According to Table 6, the CO2 indicator (%) decreased from 7.06 to 5.51 (a reported change of -1.49%); CO (mg) decreased from 373 to 194, an absolute change of -179 mg; SO2 (mg) decreased from 21.8 to 16.62, a change of -5.18 mg; and NO2 (mg) decreased from 0.42 to 0.06, a change of -0.36 mg.

Conclusions

Thus, according to the results of the series of conducted tests of the fuel activator, it can be concluded that the installation of a fuel activator in the fuel system of a car leads to significant fuel savings and, as a consequence, to a reduction in the average level of harmful exhaust-gas emissions from cars with internal combustion engines, which is fully consistent with the findings of foreign researchers such as Fontaras, G., Rexeis, M., Dilara, P., Hausberger, S., Anagnostopoulos, K., David L. Greene, John DeCicco, Pasaoglu, G., Honselaar, M. and Thiel, C., Tang, B., Wu, X. and Zhang, X. and others [6,10,12,17,18]. The operation of the device is based on an innovative way of influencing hydrocarbon molecules (gasoline, diesel fuel, methanol, fuel oil, etc.) by electromagnetic oscillations at a resonant frequency. This principle makes it possible to obtain a directed explosion in the combustion chamber of an internal combustion engine, which propagates faster than usual and with greater intensity. This significantly improves the traction characteristics of the combustion engine and significantly reduces harmful emissions into the atmosphere.
2,336.6
2020-01-01T00:00:00.000
[ "Engineering" ]
Semi-Automated Data Analysis for Ion-Selective Electrodes and Arrays Using the R Package ISEtools A new software package, ISEtools, is introduced for use within the popular open-source programming language R that allows Bayesian statistical data analysis techniques to be implemented in a straightforward manner. Incorporating all collected data simultaneously, this Bayesian approach naturally accommodates sensor arrays and provides improved limit of detection estimates, including providing appropriate uncertainty estimates. Utilising >1500 lines of code, ISEtools provides a set of three core functions—loadISEdata, describeISE, and analyseISE— for analysing ion-selective electrode data using the Nikolskii–Eisenman equation. The functions call, fit, and extract results from Bayesian models, automatically determining data structures, applying appropriate models, and returning results in an easily interpretable manner and with publication-ready figures. Importantly, while advanced statistical and computationally intensive methods are employed, the functions are designed to be accessible to non-specialists. Here we describe basic features of the package, demonstrated through a worked environmental application. Introduction The R programming language [1] has transformed modern data analysis, allowing free access to its core features and thousands of contributed packages. Here, we introduce ISEtools [2], a new package for the statistical analysis of data from ion-selective electrodes (ISEs) available on the Comprehensive R Archive Network (CRAN) [1] and easily installed within R. We also note notation, where R packages are in boldface, while R functions or commands are in Courier New font. ISEtools implements Bayesian models described by Dillingham et al. [3] that allow analyses to be conducted with single ISEs or jointly using data from sensor arrays. Its intent is to be a resource for the ISE community enabling statistical best practices, with new features added over time according to community demands and needs. Advances in our understanding of the mechanisms of ISE response has led to increased sensitivity, miniaturisation, and simplified experimental protocols. This has enabled their application in challenging areas, including environmental analysis, wearable sensors, and medical applications [4][5][6][7]. These demanding applications require more sophisticated data analysis techniques, which can accommodate both the multivariate data that arises from sensor arrays and the underlying non-linear response of ISEs. ISEtools implements advanced techniques in a straightforward manner, making these analyses accessible to researchers and practitioners alike. Currently, there is a wide range of reporting practices and methodologies employed by the ISE community, often related to the statistical and computational knowledge of researchers. While best practice should include reporting point estimates of parameters and estimates of their uncertainty [8], it is common in the ISE community to only provide point estimates of important parameters such as slopes, analyte activities, or limits of detection (LOD). Moreover, uncertainty estimates should be expressed through confidence or credible intervals, rather than simply as standard errors, to incorporate the uncertainty due to sample size or the asymmetric shapes of confidence intervals for some ISE parameters. The statistical skills required to implement these best practices depends on the parameter being estimated and the data collected. 
Examples of advances in estimation include non-linear regression [9], Bayesian techniques that simultaneously utilise measurements from two different ISEs for source separation [10] or arrays of redundant sensors to improve precision [3], and neural networks that incorporate environmental patterns [11]. These statistical approaches add value by synthesising all available data. For example, consider the relatively simple case of estimating the slope of the Nernstian portion of the ISE response curve using linear versus non-linear regression. If using linear regression, uncertainty in the slope is linked to a t-distribution with n -2 degrees of freedom (df) for the linear case, or n -3 df for the nonlinear case. However, n in linear regression is restricted to calibration data in the Nernstian region, while n in non-linear regression is based on all calibration data. This means that a seven-point calibration with three points in the linear range would have a t-based multiplier of t 1 = 12.7 for a 95% confidence interval if using linear regression, but t 6 = 2.4 for non-linear regression. That is, when employing linear approximations that use only a subset of the data collected, information is lost and uncertainty increases. Often, a follow-on result is that uncertainty is neither estimated nor reported. Similar issues arise when estimating the activities of experimental samples, compounded by asymmetric sampling distributions in some regions of the response curve or when using standard addition techniques. Similarly, if defined in a probabilistic manner in accordance with IUPAC recommendations [12], the LOD is a highly non-linear function of three parameters resulting in a skewed distribution that may have substantial uncertainty [13]. The goal of ISEtools is to make implementing best practices [8,12,14] as simple as possible, for as wide a range of data as possible, and for as many researchers as possible. The version introduced here (Version 3.1.1) implements statistical methods described by [3,13] for single ISEs or ISE arrays of redundant sensors, allowing the estimation of model parameters, experimental activities, and LODs. Additional functionality will be introduced in future versions (e.g., current projects include developing methods to accommodate sensor arrays measuring multiple analytes, estimating LOD for an entire sensor array, and improving statistical methods for estimating selectivity coefficients). Substantial automation means that researchers can simply load data stored in a text file (e.g., typically from a spreadsheet) using the command loadISEdata. Once loaded, the ISE(s) can be characterised using the command describeISE. If the data includes experimental samples, activities can be estimated using the command analyseISE. In the background, the software package determines the structure of the data (e.g., the number of ISEs or whether standard addition was used, which Bayesian model is appropriate for analysis, and the initial values for numerical procedures). It also calls specialist software to implement the model and processes results for easy interpretability and clear graphical representation. Materials and Methods This section describes the basic statistical model used, data structures supported, a computational overview, and implementation of an analysis of lead in soil. Technical details, definitions, and numerous options are not presented. 
Instead, readers are referred to the vignette for a detailed description of the package and to the help files for individual functions. The vignette (ISEtools.pdf) is available from https://CRAN.R-project.org/package=ISEtools, and is also accessible after installing R and loading the ISEtools library. Individual help files (e.g., for loadISEdata) are also available once the ISEtools library is loaded. These are accessed via:

library(ISEtools); vignette("ISEtools"); help(loadISEdata)

ISE Response Model

Ion-selective electrodes convert analyte activity to an electrical signal [15], with the response described by the empirical version of the Nikolskii-Eisenman equation [3,16] and parameterised as

y = a + b log10(x + c) + error,    (1)

where y is the electromotive force (emf) response of the ISE; x is the activity of the ion of interest; a is a baseline emf; b is the slope, whose theoretical value is determined by the valence of the primary ion, temperature, and natural constants; c is a parameter linked to the interfering ions within the chemical matrix and the selectivity of the ISE to those ions; and the random emf noise follows a normal distribution (i.e., error ~ normal(0, sigma)). Figure 1 shows the expected response of a single ISE, including the flat region (when activity x << c) that cannot reliably be distinguished from a blank, and the Nernstian region (when x >> c) in which linear regression methods may be usefully employed. ISEtools was specifically developed for applications with data across the full response curve, but also works for datasets entirely within the Nernstian region.

Computational Overview

ISEtools implements Bayesian methods described by [3,13], and operates within R [1] or the related RStudio [17], interfacing with an additional programme to run the Bayesian analyses, OpenBUGS [18] or jags [19], both based on the BUGS [20] language. Users with basic familiarity with R or other scripting languages will have an advantage getting started, but will not need familiarity with the Bayesian programmes. Installation requires R or RStudio as well as OpenBUGS or jags, all of which are free, easily accessible through simple web searches, and straightforward to install. We recommend OpenBUGS, but include a jags option for macOS users. Users must also install the R packages ISEtools, Xmisc [21], coda [22], and either R2WinBUGS [23] and BRugs [18] (if using OpenBUGS) or rjags [24] (if using jags) through the built-in interface in R or RStudio. Xmisc and coda are often installed automatically as dependencies of ISEtools, depending on user settings in R. Users use three core functions: loadISEdata to import and process the ISE data, describeISE to characterise ISEs using calibration data (e.g., to estimate model parameters, LODs, and their uncertainty), and analyseISE to estimate unknown activities of experimental samples. In conjunction with each function are print, summary, and plot commands to summarise and visualise model output.

Data Structures and Estimation

ISEtools is designed to work with calibration data, where x and y are both observed, to estimate the model parameters a, b, c, and sigma. This also allows estimation of LOD α,β based on rates of false positives (α) and negatives (β) as recommended by IUPAC [12], using defaults of α = β = 0.05. We note that this is not the commonly used LOD calculation for ISEs [25], which is based on the intersection of the Nernstian line with a blank but does not meet general IUPAC recommendations for LODs.
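To make the empirical response model concrete, the sketch below simulates an idealised calibration curve using the parameterisation written above; the parameter values are arbitrary illustrative choices, not fitted values from any dataset in this paper, and the simulation itself is not part of the package.

## Hedged sketch: simulate the empirical response parameterised above,
##   emf = a + b*log10(x + c) + noise, noise ~ Normal(0, sigma).
## Parameter values below are illustrative only.
a <- 200; b <- 29.6; c <- 1e-6; sigma <- 1    # a (mV), b (mV/decade), c (activity), sigma (mV)
log10x <- seq(-9, -3, by = 0.5)               # calibration activities on the log10 scale
emf    <- a + b * log10(10^log10x + c) + rnorm(length(log10x), 0, sigma)

plot(log10x, emf, xlab = "log10 activity", ylab = "emf (mV)",
     main = "Flat region for x << c, Nernstian region for x >> c")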
When combined with experimental data, where y is observed but x is unknown, inverse methods [3] are used to estimate unknown activities (x), conditional on the model parameters estimated from the calibration data. ISEtools accommodates experimental data in Basic format, where an emf is recorded for each experimental sample, or in Standard Addition format, where an aliquot with known activity and volume is added to each experimental sample and emf is recorded before and after the addition. The structure of the data files for an array of three ISEs is shown for calibration data (Figure 2a), experimental data in the Basic format (Figure 2b), and experimental data in the Standard Addition format (Figure 2c). Variable definitions are intuitive and fully described in the ISEtools vignette and help files. The calibration data includes variables ISEID (indicating which ISE recorded the data), log10x (the log of the known activity of the calibration samples), and emf (the recorded emf in mV). The experimental data in the Basic format has variables ISEID, emf, and SampleID (indicating which sample is being measured). The experimental data in the Standard Addition format has variables ISEID, SampleID, emf1 (the emf before the aliquot is added), emf2 (the emf after the aliquot is added), V.s (the volume of the original sample), V.add (the volume of the aliquot), and conc.add (the activity of the aliquot). ISEtools employs Bayesian methods rather than alternatives such as non-linear regression via least squares or maximum likelihood. Briefly, any prior knowledge about random variables (e.g., model parameters or analyte activity) is updated by a probability model linking observed data to those parameters. The probability model is based on the Nikolskii-Eisenman equation (Equation (1)), with adaptations for sensor arrays or standard addition data where appropriate. This provides a posterior probability distribution for the random variables via Bayes' theorem, typically presented as credible intervals that are broadly analogous to confidence intervals. Relative to other ISE data analysis approaches, the key benefit of Bayesian methods is their ability to incorporate all data into a single model and maximise information. Particularly, computational tools such as OpenBUGS and jags allow consistent implementation of both simple and complex models. Further details of the statistical models are available in the Supporting Information of [3], particularly Equations S-1 through S-3, with OpenBUGS code shown in Figures S-2 through S-4. For the simplest case of characterising ISE parameters using calibration data, Bayesian methods produce point estimates and uncertainty intervals very similar to those from non-linear regression. Similarly, if only one ISE is used and experimental data are in Basic format, inversion of a prediction interval from non-linear regression produces similar estimates and intervals. If all data are in the Nernstian region, these both simplify to linear regression results. However, in other contexts the Bayesian approach has clear advantages including supporting complex sampling distributions, non-standard data sources, and multivariate data from sensor arrays [3,10,13]. 
These advantages revolve around: (1) LOD estimation, where the ability to sample from the joint posterior distribution of model parameters allows straightforward calculation of its distribution; (2) estimation of activity when standard addition is used and the sampling distribution may be highly asymmetric; and (3) estimation of activity when multiple sensors are used in an array. For this final case, the Bayesian treatment of unknown activity x as a random variable allows all available data (e.g., multiple ISEs of varying quality measuring the same sample) to be used simultaneously to find the posterior distribution for x and calculate its 95% credible interval. Further, the model appropriately weighs data based on individual sensor quality (i.e., noisy sensors are automatically down-weighted). ISEtools currently accommodates sensor arrays of the same type of ISE, and future versions will accommodate arrays of different ISEs. Analysis of Lead in Soil Next, we show a worked example using ISEtools from an array of three solid-contact ISEs measuring lead in soil. The purpose of this example is to demonstrate the implementation and relative ease of analysis using ISEtools. Manufacturing methods for the ISE array have been described previously [26]. Briefly, the Pb 2+ -selective membrane was prepared by dissolving 5 mmol kg −1 sodium tetrakis[3,5-bis(trifluoromethyl)] phenylborate, 12 mmol kg −1 lead ionophore IV, 32 wt% poly(vinyl chloride) (PVC) and 66 wt% of bis(2-ethylhexyl)sebacate in tetrahydrofuran. This composition was based on a membrane developed for trace-level measurement of lead in rivers, lakes, and tap water [27]. Three solid-contact electrodes were prepared from Teflon-coated copper rods (3 mm diameter). The face of the sensing end was polished and sputter-coated with gold before adding a protective sleeve of PVC tubing. The gold was then coated with a drop-cast layer of a polyoctylthiophene conducting polymer before drop casting the Pb 2+ -selective membrane onto the conducting polymer layer (Figure 3). Each of the Pb 2+ solid-contact ISEs were prepared from gold-coated copper rods in a sheath of Teflon tubing (a). PVC tubing was then added to provide a protective sleeve for the sensing face (b). Next, a conducting polymer layer was drop cast onto the gold surface (c), before drop casting the Pb 2+ -selective membrane (d). Finally, three Pb 2+ -selective membranes were combined into a three-ISE array (e). Soil samples, collected at abandoned mining sites near Silvermines, County Tipperary, Ireland, were dried and ground before extraction of exchangeable metals through sonication in 1.0 × 10 −3 M HNO 3 . A four-channel, high-input impedance data acquisition system (World Precision Instruments, Sarasota, FL) was used to simultaneously measure the differences in potential between the three ISEs and a silver/silver chloride reference electrode. After the ISE array was calibrated, Pb 2+ activity in each sample was estimated using a standard addition approach. All emf values were corrected for liquid-junction potential using the Henderson equation, and ion activities were calculated according to the Debye-Hückel approximation. Here, we used the calibration data and standard addition data from the 17 soil samples to estimate the Pb 2+ activity and characterise the three ISEs. 
Data were first stored in an Excel file but subsequently saved as tab-delimited text files in the format expected by ISEtools and are included with the ISEtools package in the "/extdata" sub-folder of the ISEtools library (e.g., <pathname to R libraries>/ISEtools/extdata). In the example below, the pathname was "C:/Program Files/R/R-3.5.2/library/". In addition to calibration data (Lead_calibration.txt), the experimental data are available in the Basic (Lead_experimentalBasic.txt) and Standard Addition (Lead_experimentalSA.txt) formats (Figure 2).

Results

Computational details, syntax, and analysis results for the analysis discussed previously are described below, demonstrating the three key functions of ISEtools.

Loading Lead Data

After installing all required software and R packages, the ISEtools library was called and the lead data was loaded using loadISEdata. This simply required specifying the locations of the calibration and experimental data files:

library(ISEtools)
lead.example = loadISEdata(
  filename.calibration = "C:/Program Files/R/R-3.5.2/library/ISEtools/extdata/Lead_calibration.txt",
  filename.experimental = "C:/Program Files/R/R-3.5.2/library/ISEtools/extdata/Lead_experimentalSA.txt")

The loadISEdata function imports and processes the data, determining that multiple ISEs were used and that experimental data were present in Standard Addition format. Data can be further examined using print(lead.example) or plot(lead.example) to ensure there are no data entry errors or unusual datapoints, where the print and plot functions have been customised for ISEdata objects.

Characterising the ISEs

Once satisfied with the data quality, ISE model parameters are estimated via describeISE. The describeISE function takes data loaded using loadISEdata, the valence (Z) of the primary ion (here, 2 for Pb2+) and (optionally, if much different from room temperature) the temperature:

lead.analysis1 = describeISE(lead.example, Z=2, temperature=21)

Because loadISEdata pre-processes the text files, calibration data and the number of ISEs are automatically passed to describeISE. This allows describeISE to automatically apply the appropriate Bayesian model when it calls OpenBUGS or jags; users also have the option to specify their own model. At this point, OpenBUGS or jags implements Markov chain Monte Carlo methods, returning results to R, saved here as lead.analysis1. Parameter estimates and the LOD, as well as lower and upper values from 95% credible intervals, are displayed using the print command, again customised to print relevant output for each ISE. For ISE #1, print(lead.analysis1) produces the parameter summary for that electrode (output not reproduced here). From this, we see that ISE #1 had close to the ideal Nernstian slope, estimated as b = 29.5 mV/decade (95% CI 25.6-34.3 mV/decade), with an LOD near 10^-6. This information can be used when developing new ISEs and when testing whether they are fit for purpose [13]. For example, lead activity for some of the experimental soil samples (Figure 4) was near this LOD, indicating that, by itself, ISE #1 would struggle to distinguish the lower activities at this site from a blank. If the full distribution of the parameters was of interest, plot(lead.analysis1) would be used instead.

Estimating Activity of Experimental Samples

To estimate analyte activity in experimental samples, analyseISE was used.
Here, we used the three ISEs to analyse the 17 experimental soil samples measured using standard addition:

lead.analysis2 = analyseISE(lead.example, Z=2, temperature=21)

As with describeISE, the structure of the data informs the Bayesian model to use with analyseISE and engages OpenBUGS or jags. Again, print and plot commands can be applied to the results. Here, we plot the estimated activities, combining data from the three ISEs (Figure 4), and include options to produce appropriate axis labels, specify the plot range, and set the colour using standard R options:

plot(lead.analysis2,
     ylab = expression(paste("log ", italic(a)[Pb^{paste("2","+")}])),
     ylim = c(-7, -3), col = "steelblue")

Discussion

One of the key benefits of the ISEtools package is that it provides access to advanced statistical models for the non-specialist. These models utilise all data (i.e., data beyond just the linear portion of the response) to provide improved activity estimates, particularly in the context of sensor arrays or standard addition data. They also estimate the limit of detection following recommendations by IUPAC and provide uncertainty for those estimates. This manuscript introduced the ISEtools package. Additional detail, including information on data formatting, numerical details, and advanced options, is described in the ISEtools vignette. We recommend that interested researchers read the vignette prior to running any analyses. For researchers and practitioners experienced with R, implementation of ISEtools should be straightforward. For those without experience, there is an R learning curve to overcome. Fortunately, with millions of users and its open-source nature, there are many "getting started" tutorials available on the web, along with numerous books. We note that many new users prefer RStudio to the base version of R. Finally, we reiterate that this is a developing package, and we look to the ISE community for feedback and future development needs.
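For completeness, a compact end-to-end sketch of the workflow described above is shown below, using the example data files shipped with the package. File access via system.file() is a common R idiom assumed here for convenience; the worked example above uses explicit paths into the ISEtools library instead.

## Compact sketch of the three-step workflow: load -> describe -> analyse.
library(ISEtools)

cal <- system.file("extdata", "Lead_calibration.txt",    package = "ISEtools")
exp <- system.file("extdata", "Lead_experimentalSA.txt", package = "ISEtools")

lead.example   <- loadISEdata(filename.calibration = cal, filename.experimental = exp)
lead.analysis1 <- describeISE(lead.example, Z = 2, temperature = 21)  # characterise the ISEs
lead.analysis2 <- analyseISE(lead.example,  Z = 2, temperature = 21)  # estimate sample activities

print(lead.analysis1)   # slopes, LODs and 95% credible intervals per ISE
plot(lead.analysis2)    # estimated activities for the 17 soil samples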
4,579.4
2019-10-01T00:00:00.000
[ "Computer Science", "Engineering" ]
High-entropy high-hardness metal carbides discovered by entropy descriptors

High-entropy materials have attracted considerable interest due to the combination of useful properties and promising applications. Predicting their formation remains the major hindrance to the discovery of new systems. Here we propose a descriptor—entropy forming ability—for addressing synthesizability from first principles. The formalism, based on the energy distribution spectrum of randomized calculations, captures the accessibility of equally-sampled states near the ground state and quantifies configurational disorder capable of stabilizing high-entropy homogeneous phases. The methodology is applied to disordered refractory 5-metal carbides—promising candidates for high-hardness applications. The descriptor correctly predicts the ease with which compositions can be experimentally synthesized as rock-salt high-entropy homogeneous phases, validating the ansatz, and in some cases, going beyond intuition. Several of these materials exhibit hardness up to 50% higher than rule-of-mixtures estimations. The entropy descriptor method has the potential to accelerate the search for high-entropy systems by rationally combining first principles with experimental synthesis and characterization.

MoNbTaVWC5−x was synthesized using both hexagonal WC and W2C precursors, to determine their impact on the homogeneity of the final sample. The x-ray diffraction spectrum for the sample synthesized using WC is displayed in the top panel of Figure 37, while the spectrum for the sample prepared with W2C is shown in the bottom panel. Both spectra feature sharp peaks at similar values of 2θ, indicating that the choice of precursor has little effect on the structure of the high-entropy material.

Supplementary Figure 38. Mechanical properties. (a) Load-displacement curves for 40 indents on HfNbTaTiZrC5 at a maximum load of 50 mN. Each curve provides the three necessary parameters for obtaining the Vickers hardness HV and elastic modulus E: the peak load Pmax, the depth at peak load δmax, and the initial unloading contact stiffness κ (indicated by arrows for one indent). (b) Comparison of calculated and measured HV for 6 single-phase 5-metal carbides. All 5-metal carbides have a higher hardness than expected from their respective ROM predictions (indicated by the dotted upward arrows), whereas the calculated HV (green circles) are consistent with the ROM for the AFLOW-AEL results (blue diamonds). The measured HV for the 6 rock-salt-structure binary carbide samples, along with the calculated HV for rock-salt MoC and WC, are also plotted. Error bars represent the standard deviations for a series of 40 indents.

The experimentally measured load-displacement indentation curves for the hardness measurements on the HfNbTaTiZrC5 sample are shown in Figure 38(a). Each curve provides the three parameters necessary to obtain the Vickers hardness HV and elastic modulus E [2, 3]: the peak load Pmax, the depth at peak load δmax, and the initial unloading contact stiffness κ. The measured and calculated HV for the 6 5-metal carbides, along with 8 rock-salt binary carbides, are plotted in Figure 38(b). The calculated HV are obtained from the thermally averaged bulk (B) and shear (G) moduli, weighted according to the Boltzmann distribution at a temperature of 2200 °C (the experimental sintering temperature), using the model of Chen et al. [4].
The experimentally measured hardness for all 5-metal carbides exceeds the rule-of-mixtures (ROM) predictions, whereas the calculated values are consistent with the ROM for the AFLOW-AEL (Automatic Elasticity Library) results. Atomic disorder is not accounted for in the AFLOW-AEL calculations of the AFLOW-POCC ordered configurations, suggesting that the measured enhancement in HV is disorder-driven. For HfNbTaTiZrC5, HV is about 50% higher than the ROM estimate. Note that the ROM predictions for Mo- and/or W-containing 5-metal carbides are obtained from the AFLOW-AEL calculations, since MoC and WC do not stabilize in a rock-salt phase at ambient temperature. The values of B and G for each of the 49 configurations of the 6 5-metal carbides are listed in Supplementary Table 3. These elastic moduli are calculated using the Voigt-Reuss-Hill (VRH) average within the AEL module [5] of the AFLOW framework.

Supplementary Table 3. B and G for each configuration of the 6 5-metal carbides, as calculated using AFLOW-AEL. Units: B, G in GPa.

The PARTCAR (the geometry input file for the AFLOW partial occupation (AFLOW-POCC) algorithm [6]) and the initial POSCARs (the VASP [7] input files for the atomic geometry) for all 49 configurations of HfNbTaTiZrC5 are presented below. Starting with the rock-salt crystal structure (space group Fm-3m, #225; Pearson symbol cF8; AFLOW prototype AB_cF8_225_a_b [8]) as the input parent lattice, the AFLOW-POCC algorithm generates a set of 49 distinct configurations, each containing one atom of each of the 5 metals along with 5 carbon atoms. This is the minimum cell size necessary to accurately reproduce the required stoichiometry: a C atom with full occupancy at the anionic lattice site, and 5 different refractory metal elements, each with an occupancy probability of 0.2, at the cationic lattice site. The degeneracy gi of each configuration is given by DG in the header of each POSCAR. For HfNbTaTiZrC5, all configurations have gi = 10, except for the 49th, where gi = 120. Each of the other 5-metal carbides also has the same 49 distinct configurations, with the same number of anions and cations in the unit cell, but with a different set of 5 refractory metal elements at the cationic sites. Note that the numerical designation of the configurations can vary from system to system. The total degeneracy (sum of gi = 600) is the same for all 5-metal systems.
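To illustrate the post-processing described above, the sketch below Boltzmann-weights per-configuration bulk and shear moduli at the sintering temperature and converts the averages to a Vickers hardness with the macroscopic model of Chen et al. cited as Ref. [4] (HV = 2(k²G)^0.585 − 3 with k = G/B, moduli in GPa). The energies, moduli and degeneracies used are placeholder values, not the entries of Supplementary Table 3.

## Hedged sketch of the thermal averaging and hardness conversion described above.
## Energies, moduli and degeneracies below are placeholders, not Table 3 values.
kB   <- 8.617e-5                      # Boltzmann constant, eV/K
Tsin <- 2200 + 273.15                 # sintering temperature, K

conf <- data.frame(E_eV = c(0.00, 0.02, 0.05),   # energy per cell above the lowest configuration
                   g    = c(10, 10, 120),        # configuration degeneracies
                   B    = c(290, 285, 280),      # bulk moduli, GPa
                   G    = c(190, 185, 180))      # shear moduli, GPa

w    <- conf$g * exp(-conf$E_eV / (kB * Tsin)); w <- w / sum(w)   # Boltzmann weights
Bbar <- sum(w * conf$B); Gbar <- sum(w * conf$G)

# Chen et al. hardness model applied to the thermally averaged moduli
k  <- Gbar / Bbar
Hv <- 2 * (k^2 * Gbar)^0.585 - 3
c(B = Bbar, G = Gbar, Hv_GPa = Hv)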
1,275.4
2018-10-22T00:00:00.000
[ "Materials Science", "Computer Science" ]
Scheme for teleporting an arbitrary superposition of atomic Dicke states via multi-fold coincidence detection

We propose a novel scheme for teleporting an arbitrary superposition of two-atom Dicke states with atoms trapped in optical cavities. The scheme requires a multi-fold coincidence detection and is insensitive to the imperfection of the photon detectors, and the fidelity is unity for any superposition of Dicke states. Further, we also point out that the scheme can be extended to teleport an arbitrary superposition of N-atom Dicke states with unit fidelity.

Introduction

Quantum teleportation is not only a fundamental phenomenon of the quantum world, but also one of the key procedures in the area of quantum information processing [1]. By teleportation, an unknown quantum state can be transported from one place to another without moving through the intervening space. Following the pioneering contribution of Bennett et al [1], quantum teleportation was first demonstrated experimentally using spontaneous parametric down-conversion [2]. In the preparation stage, Bob generates a maximally entangled state between atoms and two-mode cavity fields. In the detection stage, Alice uses photon detectors to measure the photons leaking out from the two cavities. If each of the four detectors registers a single click during this stage, the scheme is successful and the superposition of Dicke states is transferred from cavity A to cavity B with unit fidelity. In the scheme, imperfection of the photon detectors decreases the success probability, but has no influence on the fidelity of the teleportation. We now analyse the scheme in detail. The level structure of the atoms, which has four ground states and three excited states, is shown in figure 2. For concreteness, we consider a possible implementation using 87Rb, whose usefulness in the quantum information context has been demonstrated in recent experiments [8]. The ground states |g_L⟩, |g_0⟩ and |g_R⟩ correspond to |F = 1, m = −1⟩, |F = 1, m = 0⟩ and |F = 1, m = 1⟩ of 5^2S_1/2, respectively, and one could use |F = 2, m = 0⟩ of 5^2S_1/2 as the ground state |g_a⟩. The excited states |e_L⟩, |e_0⟩ and |e_R⟩ correspond to |F = 1, m = −1⟩, |F = 1, m = 0⟩ and |F = 1, m = 1⟩ of 5^2P_3/2, respectively. The lifetimes of the atomic levels |g_L⟩, |g_R⟩, |g_0⟩ and |g_a⟩ are comparatively long, so that spontaneous decay of these states can be neglected. We encode the ground states |g_L⟩ and |g_R⟩ as the logic zero and one states, i.e. |g_L⟩ = |0⟩ and |g_R⟩ = |1⟩. The transitions |e_0⟩ ⇔ |g_R⟩ and |e_L⟩ ⇔ |g_0⟩ (|e_0⟩ ⇔ |g_L⟩ and |e_R⟩ ⇔ |g_0⟩) are coupled to cavity mode a_L (a_R) with left-circular (right-circular) polarization. On Alice's side, the transitions |e_L⟩ ⇔ |g_L⟩ and |e_R⟩ ⇔ |g_R⟩ are driven by π-polarized classical fields with the Rabi frequency Ω_A(t). The transition between |e_0⟩ and |g_0⟩ is electric dipole forbidden [20]. On Bob's side, π-polarized classical fields drive the transition |e_0⟩ ⇔ |g_a⟩ with the Rabi frequency Ω_B(t). (Caption of figure 2: (a) The involved atomic levels and transitions for each atom of Alice. Alice's qubit is encoded in the two Zeeman sublevels |g_L⟩ and |g_R⟩. For the initial state of equation (2), only four transitions are included. |e_L⟩_A → |g_0⟩_A (|e_R⟩_A → |g_0⟩_A) is coupled to the left-circularly (right-circularly) polarized mode of the cavity. |e_L⟩_A ⇔ |g_L⟩_A and |e_R⟩_A ⇔ |g_R⟩_A are driven by π-polarized classical fields, and the transition between |e_0⟩_A and |g_0⟩_A is electric dipole forbidden. (b) The involved atomic levels and transitions for each atom of Bob. Bob's qubit is also encoded in the two Zeeman sublevels |g_L⟩ and |g_R⟩.
For the initial state |g_a, g_a⟩|0, 0⟩, only three transitions are included. |e_0⟩_B → |g_R⟩_B (|e_0⟩_B → |g_L⟩_B) is coupled to the left-circularly (right-circularly) polarized mode of the cavity. |e_0⟩_B ⇔ |g_a⟩_B is driven by π-polarized classical fields.)

Initially the atomic state in cavity A is prepared in a Dicke-state superposition of the form of equation (1), where the coefficients C_i are arbitrary and satisfy |C_1|² + |C_2|² + |C_3|² = 1. We also assume that the two cavity modes are in the vacuum state |0, 0⟩_A. Thus, the initial state of the system is that of equation (2). For such an initial state, only four transitions are involved, as depicted in figure 2(a), and the Hamiltonian describing the system is given by equation (3), where a†_L and a†_R denote the creation operators for the corresponding polarized modes of the cavity, g_A is the atom-cavity coupling, and the time dependence of Ω_A(t) can be controlled by the external laser field. The Hamiltonian (3) has the orthogonal dark states of equations (4)-(6). Based on the dark states of equations (4)-(6), one can map the two-atom Dicke states of equation (1) into two-photon cavity states. Initially, we choose the parameters to satisfy Ω_A ≪ g_A. In the limit Ω_A(t) ≪ g_A, the dark states |D_LL⟩, |D_LR⟩ and |D_RR⟩ coincide with the states |g_L, g_L⟩|0, 0⟩_A, (|g_L, g_R⟩ + |g_R, g_L⟩)|0, 0⟩_A/√2 and |g_R, g_R⟩|0, 0⟩_A, respectively. Therefore, by adiabatically increasing the coupling Ω_A(t), the initial state (2) of the system evolves within the dark space into the corresponding superposition of dark states at the time t. For Bob, two atoms are initially prepared in the ground states |g_a, g_a⟩ and the cavity fields are in the vacuum states. For such an initial state, only three transitions are involved, as depicted in figure 2(b), and the Hamiltonian describing the system is given by equation (8). By adiabatically tuning Ω_B(t) from Ω_B ≪ g_B to Ω_B ≫ g_B, the initial state |g_a, g_a⟩|0, 0⟩_B evolves into the corresponding dark state at the time t. In order to consider the effect of cavity decay and photon observation on the state evolution in this physical model, it is convenient to follow a quantum trajectory description [21]. The evolution of the system's wave function is governed by a non-Hermitian Hamiltonian as long as no photon decays from the cavity. In this case, the state of the atom-cavity system j (j = A, B) at the time t evolves accordingly under this non-Hermitian Hamiltonian. Here we assume that the two optical cavities have the same loss rate κ for all modes, and that H_A and H_B correspond to equations (3) and (8). Following [19,21], a direct calculation gives Alice's state at the time t together with the corresponding success probability P_A. If the interaction Hamiltonians (3) and (8) are applied to the atom-cavity systems A and B simultaneously, so that the preparation of the atom-cavity states |Ψ(t)⟩_j (j = A, B) ends at the same time, the joint state of the two atom-cavity systems A and B is the product state |Ψ(t)⟩_A|Ψ(t)⟩_B. This implements the preparation stage of the protocol. The success probability is given by P_suc = P_A P_B, i.e. the probability that no photon decays from either atom-cavity system during the preparation.
If the conditions j e −κt g j are satisfied, the last terms in equations (12) and (13) are much larger than other terms, then | (t) A and | (t) B are reduced into the forms and which demonstrates that Alice maps the unknown Dicke-state superposition equation (1) into the two-mode state (15) and Bob generates a maximally entangled state between atoms and two-mode cavity fields (16). Now we consider the detection stage, in which we make a photon number measurement with four photon detectors D j (j = AH, AV, BH, BV) on the output modes of the set-up. We assume that photons are detected at the time τ. This assumption is posed to calculate the system's time evolution during this time interval in a consistent way with the 'no-photon-emission-Hamiltonian' (10). The detection of one photon with the detector D j (j = AH, AV, BH, BV) can 8 Institute of Physics ⌽ DEUTSCHE PHYSIKALISCHE GESELLSCHAFT be formulated with the operator b j on the joint state | (τ) A | (τ) B . As shown in figure 1, the photons leaking out from the cavities A and B are mixed at the PBS, whose action is to transmit the horizontal polarization and reflect vertical polarization. Since the photons coming from cavities are circularly polarized, two QWPs are inserted before PBS to map the left-circular photons (right-circular photons) into the horizontal polarization photons (vertical polarization photons). After leaving the PBS, photon polarizations are rotated by polarization-rotations, whose actions are given by transformation a H → cos θa H + sin θa V and a V → cos θa V − sin θa H , where θ is rotation angle and will be determined later. Thus the operators of the four detectors have the following forms If each of the four detectors detects one photon, the state of the total system is projected into If we choose parameter θ = π/8, equation (18) becomes Based on the four-photon coincidence, Bob performs local unitary operations 1 to his atoms to transform state (19) into equation (1), the teleportation is thus finished. The success probability to achieving four-photon coincidence is P succ = (1 − e −2κτ ) 4 /12. We now give a brief discussion on the influence of the quantum noise on the scheme. Firstly, it is evident that the scheme is inherently robust to photon loss, which includes the contribution from channel attenuation, and the inefficiency of the photon detectors. All these kinds of noise can be considered by an overall photon loss probability η [4]. It is noticed that the present scheme is based on the four-photon coincidence detection. If one photon is lost, a click from each of the detectors is never recorded. In this case, the scheme fails to teleport superposition of Dickestates. Therefore the imperfection of photon detectors only decreases the success probability P succ by a factor of (1 − η) 4 , but have no influence on the fidelity of the expected operation. Next we show that the scheme is insensitive to the phase accumulated by the photons on their way from the ions to the place where they are detected. The phases ϕ 1 = kL 1 and ϕ 2 = kL 2 , where k is the wavenumber and L j are the optical lengths which photons travel from the jth cavity towards the photon detectors, lead only to a multiplicative factor e i2(ϕ 1 +ϕ 2 ) in equation (19). This result demonstrates that phase accumulated by the photons has no effect on the conditional implementation of the quantum operation. 
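A small helper, using the P_succ expression quoted above together with the (1 − η)^4 detector-loss factor, illustrates how the four-fold coincidence probability saturates at 1/12 while the fidelity is untouched; the numerical values of κτ and η below are arbitrary examples.

```python
import numpy as np

def p_success(kappa_tau, eta=0.0):
    """Four-fold coincidence probability quoted in the text,
    P_succ = (1 - exp(-2*kappa*tau))**4 / 12, reduced by the overall
    photon-loss probability eta through the factor (1 - eta)**4."""
    return ((1.0 - eta) * (1.0 - np.exp(-2.0 * kappa_tau))) ** 4 / 12.0

for kt in (0.5, 1.0, 2.0, 5.0):
    print(kt, p_success(kt, eta=0.0), p_success(kt, eta=0.3))
# As kappa*tau grows the probability saturates at 1/12 for ideal detectors,
# while eta = 0.3 lowers it by (0.7)**4 ~ 0.24 without touching the fidelity.
```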
Thirdly, since the scheme is based on adiabatic passage technology, atomic excited states can be decoupled from the evolution, and the influence of atomic spontaneous emission on the teleportation could be suppressed. 9 Institute of Physics ⌽ DEUTSCHE PHYSIKALISCHE GESELLSCHAFT Teleportation of N-atom Dicke states In this section, we turn to the problem of teleportation of N-atom Dicke states from one cavity to another. We use the notation |g ⊗m L g ⊗N−m R to denote a normalized Dicke state where m atoms are in the level g L and N a − m atoms are in the level g R . The atomic state in cavity A is initially prepared in Dicke-state superposition of the form and cavity fields are in the vacuum states. For such an initial state, the Hamiltonian describing the system is still given by equation (3), which has the following dark states Thus, by adiabatically adjusting the coupling A (t) from A g A to A g A , based on the dark states, one can map the N-atom Dicke superposition states into the two-mode N-photon states For Bob, N atoms are initially prepared in the state |g a and cavity fields are in the vacuum states. For such initial state, the Hamiltonian describing the system is given by equation (8). In the adiabatic limit, the initial state |g a ⊗N |0, 0 B evolves into the following dark state By adiabatically tuning B (t) to go from B g B to B g B , achieves the maximally entangled states of N-atom Dicke states and N-photon polarization states In order to realize teleportation, photons which originate from the state | A and | B are injected into the set-up shown in the state of the total system is projected into Based on the measurement result, Bob performs local operation to transform equation (27) to equation (20). This demonstrates that the proposed set-up, which is shown in figure 3, definitely implements the teleportation of N-atom Dick-state superposition with unit fidelity. The success probability of conditional realization is P meas = N 1−2N N i=1 cos θ 4 i , which decreases exponentially with increasing N. The quantum noise has no influence on the fidelity of the teleportation, but decreases the success probability (1 − η) 2 N. In summary, we have presented schemes to teleport an arbitrary superposition of two-atom Dicke states from one cavity to another. In contrast to the scheme [19], our scheme requires 2 The action of measurement device M is to project on the two-mode N-photon entangled states | N = N m=0 |mH, (N − m)V a . In terms of creation operators, such an N-photon state can be written in the form | N = (a † H cos θ N + a † V sin θ N e −iϕ N ) · · · (a † H cos θ 1 + a † V sin θ 1 e −iϕ 1 )|0, 0 a , where parameters tan θ 1 e −iϕ 1 , . . . , tan θ N e −iϕ N are chosen to be the N complex roots of equation (26). Institute of Physics ⌽ DEUTSCHE PHYSIKALISCHE GESELLSCHAFT multi-photon coincidence detection, and is insensitive to quantum noise, which has no influence on the fidelity of the teleportation, but decreases the success probability. The fidelity is unity for any superposition of Dicke states. Finally, it is briefly pointed out that scheme can be modified to teleport a superposition of N-atom Dicke states.
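For small N, the normalised Dicke states |g_L^⊗m g_R^⊗(N−m)⟩ used in this section can be built explicitly; the sketch below (with the same |g_L⟩ = |0⟩, |g_R⟩ = |1⟩ encoding as before) only illustrates their combinatorial structure and is limited to modest N, since the state vector grows as 2^N.

```python
import numpy as np
from itertools import combinations

def dicke_state(N, m):
    """Normalised N-qubit Dicke state with m atoms in |g_L> and N - m in
    |g_R>: an equal superposition of all basis strings with m zeros and
    N - m ones, under the encoding |g_L> = |0>, |g_R> = |1>."""
    state = np.zeros(2 ** N)
    for ones_positions in combinations(range(N), N - m):
        index = sum(1 << (N - 1 - p) for p in ones_positions)
        state[index] = 1.0
    return state / np.linalg.norm(state)

# Example: the three two-atom Dicke states and a three-atom one.
for N, m in [(2, 2), (2, 1), (2, 0), (3, 1)]:
    d = dicke_state(N, m)
    print(N, m, np.count_nonzero(d), np.vdot(d, d).real)
# The number of non-zero amplitudes is binomial(N, m) and each state is
# normalised; an arbitrary superposition over m = 0..N is the input state
# of the N-atom teleportation protocol.
```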
3,460.8
2006-10-01T00:00:00.000
[ "Physics" ]
Effect of Cockele Shells on Mortars Performance in Extreme Conditions Abstract This paper studies the use of cockle shell as supplementary cementitious materials SCMs as substitute for cement. The cockle shells generally have a high CaO content which can alter the behavior and the properties of mortars and concrete. Cockle shell is used with weight ratios of 5, 10, 15 and 20% to formulate a mortar with cockle shell and a control mortar CM with 0% of cockle shell. The properties in the fresh state, the mechanical strength and the weight loss test as well as the depth of penetration of each mixture were carried out through the conducted experiments. Consistency and density of fresh mortars were determined, the results obtained showed that cockle shell have a significant influence on the properties of mortars in the fresh state. The different results of hardened mortars show that the introduction of cockle shell tends to accelerate the development kinetics of strength at the young age but its ratio cannot be above of 5%. Mortar with 10% presented the lower depth penetration, the loss weight increased proportionally with the increasing of cockle shell amount. INTRODUCTION The use of waste and by product materials instead of raw materials in cement and concrete mixture was the best process to achieve the development of the concrete industry [1].In order to reduce the dependency on virgin materials for construction, efforts have been made to incorporate by-products and wastes from different industries as alternatives in concrete.The search for a supplementary cementitious materials (SCMs) has become a major concern for the manufacture of cements, recently several researchers have been proven that the use of SCMs improve mechanical performances and durability of cements [2].One of potential waste material that is available in nature is waste seashells.There are several types of waste seashell available, such as cockle shells, oyster shells, mussel shells, scallop shells.Every year about 10 million tons of shells are disposed of in landfills in China, which is the largest shellfish producer in the world [3].In France 160,000 tons of seashells are produced by shellfish farming industry and about 45,000 tons of shellfish per year from fishing [4].When seashell wastes left for a long time, it can lead to microbial decomposition of salts into gases like hydrogen sulphide, and ammonia which are very harmful to nature [5].The use of seashell in concrete is studding by a few researchers, Pusit Lertwattanaruk et al. [6] used ground seashells as supplementary cementitious materials, their results indicated that ground seashells can be applied as a SCMs in mortars, they found that mortars with ground seashells present an adequate strength, less shrinkage with drying and lower thermal conductivity compared to the control mortar.Wen-Ten Kuo et al. [7], and Gil-Lim Yoon et al. [8] used the waste oyster shells as replacement in sand, they showed that oyster shells can be resources in replacement of sand , their results demonstrated that there was no significant reduction in the compressive strength up to 20% [7] and 40% [8] of dosages of waste oyster shells sand instead of sand.Monita Olivia et al. [1] also used ground cockle shell as SCMs, they found that the compressive strength has been an optimum value by 4% of replacement, while the tensile strength and flexion have been shown better results with the seashell concrete.Dang Hanh Nguyen et al. 
[9] found that the crushed seashells can be used as a replacement in the previous concrete composition, for low-volume roads, they showed that an acceptable durability of pervious concrete with and without crushed shells for the application of low traffic load.According to Monita Olivia et al. [10], the substitute of the Portland cement with the ground cockle shell and clam shell influenced in physical and mechanical properties of the concrete, the OPC cockle concrete showed lower setting time, density and tensile strength than the OPC concrete.E.I. Yang et al. [11] studied the durability of concrete with oyster seashell partially substituted for fine aggregate.They noted that, the strength, the freeze thaw resistance, drying shrinkage, and permeability are significantly affected by the increase of oyster seashell content, essentially for long-term, while carbonation and chemical attack were not substantially affected.Carolina Martínez-García et al. [12] use the mussel shell as fine aggregate in concrete, they proved that weight loss increases even in small percentage, the increase in weight loss is related to the high-water absorption of mussel shell aggregate in comparison with natural sand.Yang et al. [13] found that the workability of the concrete, decreased when the substitution rate of oyster shell increased, they used oyster shell as fine aggregate replacement in concrete.The work presented in this paper aims to valorise the use of cockle shells and demonstrate that it can be reused as SCMs as substitute for cement, based on their chemical and mechanical properties, studying their effect on mechanical and chemical performance of different mortars. MATERIALS Natural sand class 0/2 from Felfela, Skikda, Algeria is used to make mortar mixtures complied with NF EN-196-1, with a specific gravity of 2.53; the sand equivalent value is 79.59% and fineness modulus of 3.30.The Ordinary Portland cement type I CEMI 42.5 was used as a binder, it is delivered by the cement company of Ain Kbira Company (setif) Algeria.It has a density of 3.07 and the Blaine surface area is about 3700cm 2 /g.The cockle shells were collected from the beach of Guerbaz region of Skikda Algeria.The cockle shells were cleaned, dried, crushed into small pieces using a crusher and grinding in a standardized ball mill with a capacity of 10 kg.After its preparation for use in this experiment, its specific surface was about 6550 cm 2 /g, and has a specific gravity of 2.98.The chemical composition of cockle shell powder and cement are shown in Table1.Cockle shells are introduced as a SCMs cement substitute.Potable water was used in all the mixes and curing of the specimens. TEST AND MIXING PROCEDURES Test specimens of mortar were prepared according to the standard EN196-1 [14]. All mixtures are fixed the same ratio water on cement W/C by 0.57.The test specimens used for the compressive strength test, Flexion strength test, and immersion absorption test and chloride penetration had dimensions of 4×4×16 cm and those used for the weight loss test had dimensions of 5×5×5 cm.Cockle shells powders were used to partially replace cement in the mortar at replacement weight ratios of 5, 10, 15 and 20%.A control mortar (CM) specimen (0% of cockle shell) was used as a reference.Three specimens were used for each age.Specimens produced from fresh mortar were remolded after 24 h and were then cured in water at 20 ± 2 °C until the date of the test. 
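As a rough illustration of how the mixes scale, the helper below splits a batch for each replacement level. Note the assumptions, none of which are stated explicitly in the paper: the replacement is by weight of the total binder, the 0.57 water dosage is applied to the total binder mass, the usual EN 196-1 sand-to-binder ratio of 3:1 is kept, and the 450 g batch size is only a conventional example.

```python
def mortar_batch(binder_total_g, shell_fraction, w_c_ratio=0.57,
                 sand_to_binder=3.0):
    """Split a mortar batch for a given cockle-shell replacement level.
    Assumptions (not stated explicitly in the paper): replacement by weight
    of total binder, water dosed on the total binder mass, EN 196-1 sand
    proportion of 3:1 retained."""
    shell = binder_total_g * shell_fraction
    cement = binder_total_g - shell
    water = w_c_ratio * binder_total_g
    sand = sand_to_binder * binder_total_g
    return {"cement_g": cement, "cockle_shell_g": shell,
            "water_g": water, "sand_g": sand}

for pct in (0, 5, 10, 15, 20):
    print(pct, mortar_batch(450.0, pct / 100.0))
```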
Consistency of mortars The slump and the density of various mortars are evaluated according to the standard NF EN 1015-3 and NF EN 1015-6 respectively [15,16] The results of consistency and density of CM and mortars with cockle seashells are shown in Fig. 1 and Fig. 2. Fig. 1. Consistency of mortars containing cockle shell There was a slight increase of slump values with increasing percentage of cockle -shells until 10%, the replacement of cement with ground seashells decreased the amount of cementitious material and increased the free water content in the mix.Beyond 10% the slump values were turn to decrease.The water requirement of cockle shell became higher owing to its relatively high specific surface area of the binder (cement +cockle shell) [9].From Fig. 2, we can observe that the density of the mortars was not stable; the density was diminished with a low substitute of cockle shell (5%), and then returned to increase in a lean way.The mortar with 15% of cockle shell had a density similar to the mortar 5% cockle shell, beyond 15% of cockle shell the density increased of a significant way, the mortar 20% shell had a density 2,189 which presented the highest density. Compressive and flexural tensile strength of mortars After 2, 7, 28 and 90 days of water curing, the 4×4×16 cm samples were used for compressive and flexural tensile strength tests.The results are shown in Fig. 3 and Fig. 4, respectively.According to the results obtained (Fig. 3), we observe that the compressive strength of all mortars studied, increases regularly with age and shows no fall until the 90 days.Increasing the percentage replacement of cockle shell, tended to reduce the compressive strength of the mortars, because of the less reactive material of cockle shell mixed with the Portland cement and the reduction of the reactive part exists in the cement.At early age (2 days) mortar with 5% of cockle shell showed the best compressive and flexural tensile strength, it can be attributed to the height Blaine surface area of cockle shell.The mortars with 10% and 15 % showed a similar compressive strength to CM (0%).At 28 and 90 days compressive strength of mortars with 5% of cockle shell falling back in comparison with compressive strength of CM, this was likely due to a decrease of the cement content that could reduce the rate of hydration in mortar.It must be noted that cockle shell increase compressive strength at early age and the percentage of cockle shell must limited to 5% or less, Monita, olivia et al. found that the compressive strength has been an optimum value of 4% of replacement [1].The illustrated results of the flexural strength test (Fig. 4) show similar tendencies to compressive strength.However, the flexural strengths of mortars with 5% of cockle shell were low compared to those of the control mortar at 7 days.In the long term (at 28 and 90 days) the flexural strength decreases with increasing cockle shell content. Water absorption test To determine the water absorption of different mortar specimens, three specimens from each mortar were dried at a temperature of 105 o C and its weight determined as initial weight.Then the samples were immersed in water for 24 hours and its saturated surface dry weight was recorded as the final weight.Water absorption of specimens is reported as the percentage increase in weight.It was determined by the formula: Where Mw, is weight of specimen after immersion in water and Md, Weight of specimen after drying. The results of water absorption are presented in Fig. 5. Fig. 
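The water-absorption formula itself appears to have been dropped from the extracted text; the standard immersion-absorption expression consistent with the variables defined above (Mw the saturated weight, Md the oven-dry weight) is sketched below, with hypothetical specimen weights used purely for illustration.

```python
def water_absorption_percent(m_saturated_g, m_dry_g):
    """Water absorption as the percentage increase in weight after 24 h
    immersion: (Mw - Md) / Md * 100, with Mw the weight after immersion
    and Md the weight after drying, as described in the text."""
    return (m_saturated_g - m_dry_g) / m_dry_g * 100.0

# Hypothetical specimen weights, for illustration only.
print(round(water_absorption_percent(602.0, 560.0), 2))   # 7.5 %
```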
5. Effect of substitution rate on water absorption variation When the ratio of replacement of cockle shell in cement increased, there was an increase in water absorption, due to the height porosity of specimens with cockle shell.Until the ratio of 20% where water absorption percentage decreased, this is due to the reduced volume of capillary voids caused by the high compactness of cockle seashell mortar.In fact, in the case of cockle shell, capillary voids in the mortar are more important, which facilitates the development of new products during the hydration of cement. Chloride penetration After 28 days of water curing, specimens 4×4×16 cm, were stored in sodium chloride solution with the concentration of 5%.At certain deadlines (28, 56 and 90 days), the samples are removed from the solution and split in two parts.The depth of chloride penetration is measured on fresh fractures a colored indicator.The aggressive solution was renewed every 28 days. The measurement of the depth of chloride penetration is carried out using a sputtering of the colored indicator: AgNO3 silver nitrate (V) on the faces of the rupture test specimen. After spraying the AgNO3, if chlorides are present in the interstitial phase of the sample, two zones of different colors appear.The zone containing free chlorides (that is to say, soluble in water) will appear in light color and the zone containing none, of dark color.Indeed, when spraying the reagent, the chemical reaction between the chlorides present and the silver is characterized by the appearance of a white precipitate (AgCl).The depth of chloride penetration is therefore the distance between the surface and the line of separation between the two different color areas. The results of depths of chloride penetration are presented in Fig. 6.As the AgNO3 colorimetric indicator allows reliable free-chloride penetration detection for different mortars, we can observe that, there is a difference between the results at the age of 28 days and at the age of 56 days, however there are identical results at 56 and 90 days.CM and mortars 5% of cockle shell have the same depth penetration at 28 days, it means that the addition of 5% of cockle shell did not affect on mortar to resist chloride penetration.When the replacement of 0 0,5 cockle seashell increased, chloride penetration also increased.Beyond 28 days, the evolution of the penetration depth is non-stationary which is defined by a nonlinear variation as shown in Fig. 6 above.At 56 and 90 days mortar with 10% of cockle shell presented the best resistance to penetration that corresponds to the shallower depth of chloride penetration.The depth penetration of chloride decrease with increasing cockle seashell amount until 15%, higher cement consumption means a higher content of C3A available to help in binding chloride ions, this higher binding capacity reduces the amount of free chlorides, which are those that effectively take part in mass transport [17]. The depth of penetration return to increase with 15% of cockle shell, the rate of chloride penetration depends on the porosity of the mortars and the pore size distribution.In addition, the difference between the penetration depths could be due to a different mobility of the chlorides inside the pores following their distribution in the cement matrix.It can be also observed that the depth of chloride penetration increased with the age. 
Weight loss After 28 days of water curing, the 5×5×5 cm specimens were immersed in two solutions, CH3COOH and H2SO4 with the same concentration 5%.The aggressive solutions were renewed every 28 days.After 2, 7, 28, 56 and 90 days, they were used to estimate the weight loss according to the standard ASTMC267-96 [18] Sulfuric (VI) acid Results of the weight changes for the cockle shell mortars preserved in H2SO4 solution are presented in Fig. 7. In the specimens immersed in 5% sulfuric (VI) acid, a sudden loss of weight was noticed during 2 to 90 days.It can be also noted that in early age the loss weight increased proportionally with cockle shell content, the highest value of weight loss corresponds to specimens with 20% of cockle shell that presented 1.657%, when CM weight loss was 1.184% at the age of 7 days.The weight dropped substantially in all specimens at 28 days.From 56 days specimens with 10, 15 and 20% of cockle shell had the maximum weight loss, the highest value was presented by mortars with 20% cockle shell.However, mortar with 5% of cockle shell exhibited lesser weight loss at all ages until at 90 days where its weight loss was similar to the CM weight loss.The different hydrates stability's is variable.However, the most susceptible component in aggressive attack is Portlandite [19].When the mortars are attacked by sulfuric (VI) acid H2SO4, they react with the Portlandite Ca(OH)2 resulting from the hydration of the cement, which causes the formation of gypsum and. The process is described by the following chemical reactions: Acetic acid CH3COOH The curves, presented by Fig. 8, show the weight loss in % measured at the end of each aging of mortars stored in acetic acid solution CH3COOH (after 28 days of cure in water).Fig. 8 shows that the mass variation is presented by a loss of weight in the acetic acid (CH3COOH).At 2 days it was observed a loss of weight for all the mortars with approximate values, the mortar with 5% cockle shell had a lower weight loss about 0.97% that is the most resistant mortar at an early age.At 7 days the weight loss of CM and mortar with 5% cockle shell became close, the highest loss corresponds to mortar with 20% cockle shell, and it can be also observed that the values are close.The replacement of cockle shell by weight in cement at an early stage affected the hydration reaction and resulted in a lower amount of Portlandite Ca(OH)2.For those reasons, the reaction between acetic acid and calcium hydroxide ions decreased.At the age of 28 days there is an important weight loss corresponds to mortars with 10, 15 and 20% of cockle shell, mortar with 5% cockle shell presented always the lowest loss weight.At 56 days, the value of loss weight of mortar with 5% cockle shell has fallen compared with CM that showed the best results.However, the mortars with 20 and 15% of cockle seashell always have a loss of weight about 8.63 and 8.12% respectively.At the age of 90 days, the weight loss continues to increase, and this development has become proportionally of cockle shell content, the best result corresponds by CM.The increased porosity leads to an increase of the free water in the mortar texture [20]. 
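The chemical reactions referred to in the text seem to have been lost during extraction. The standard acid-portlandite reactions that match the surrounding description (gypsum formation under sulfuric acid, soluble calcium acetate under acetic acid, presumably reaction (3)) are most likely the following reconstruction:

```latex
% Likely reactions referenced in the text (reconstruction, not verbatim):
% sulfuric(VI) acid attack on portlandite, producing gypsum,
\mathrm{Ca(OH)_2 + H_2SO_4 \rightarrow CaSO_4\cdot 2H_2O}
% and acetic acid attack, producing soluble calcium acetate (reaction (3)).
\mathrm{Ca(OH)_2 + 2\,CH_3COOH \rightarrow Ca(CH_3COO)_2 + 2\,H_2O}
```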
It is well known that acetic acid is very aggressive compounds [21,22].Their reactivity was explained by the reaction with the Portlandite Ca(OH)2 of the mortar producing very soluble calcium salts [22] which, because of its solubility, are leached away by the aggressive solution.Calcium acetate Ca(CH3COO)2 causes dissolution of cement through the reaction between the Portlandite (calcium hydroxide) and the acetic acid, the reaction was expressed in (3). Calcium acetate The acid attacks the surface layer of the specimens, and this action leads to weight loss of the immersed specimens. CONCLUSION As seen from this study, for the reuse of cockle shell, we can draw the following conclusions: Cockle shell content affects the slump of mortars; the substitution of 10% of cockle shell shows the highest workability.The introduction of cockle seashell tends to accelerate the development kinetics of compressive and flexural strength at a young age.The incorporation of 5% cockle shell has better compressive and flexural strength at a young age (2 and 7 days), behind it has a closed strength with CM at a long-term (28 and 90 days). Compressive and flexural strength are affected in the same way by the substitution of the cockle shell in the cement. The test results indicate that the substitution of 10% of cockle shell into cement reduces the depth penetration at a long-term.This leads to less penetration of chloride ions into the mortar shown in the experiment results. In terms of weight loss due to the effect of H2SO4 acid, specimens with 5% of cockle shell were more resistant to weight loss, beyond this content; the loss will be more severe.The substitution of 5% of the cockle shell in cement can increase the resistance to chemical attack.In CH3COOH acid, the incorporation of cockle in cement tends to increase the weight loss of all mortars.Mortar with 5% of cockle shell shows losses close to the control mortar.CH3COOH and H2SO4 are very harmful for cockle shell mortars.For the same concentration (5%) of acid, the degree of attack tended to be more severe in sulfuric (VI) acid than in acetic acid. Fig. 3 . Fig. 3. Effect of the substitution rate on the compressive strength Fig. 4 . Fig. 4. Effect of the substitution rate on the flexural strength Fig. 6 . Fig. 6.Chloride penetration depth of mortars according to the age of immersion in 5% of NaCl
4,447.8
2019-06-01T00:00:00.000
[ "Materials Science" ]
CPW-Fed Antennas for WiFi and WiMAX Introduction Recently, several researchers have devoted considerable effort to developing antennas that satisfy the demands of the wireless communication industry for improved performance, especially in terms of multiband operation and miniaturization. As a matter of fact, the design and development of a single antenna working in two or more frequency bands, such as those of the wireless local area network (WLAN, or WiFi) and worldwide interoperability for microwave access (WiMAX), is generally not an easy task. The IEEE 802.11 WLAN standard allocates the license-free spectrum at 2.4 GHz (2.40-2.48 GHz), 5.2 GHz (5.15-5.35 GHz) and 5.8 GHz (5.725-5.825 GHz). WiMAX, based on the IEEE 802.16 standard, has been evaluated by companies for last-mile connectivity and can reach a theoretical coverage radius of up to 30 miles. The WiMAX forum has published three licensed spectrum profiles, namely 2.3 GHz (2.3-2.4 GHz), 2.5 GHz (2.495-2.69 GHz) and 3.5 GHz (3.5-3.6 GHz), which vary from country to country. WiMAX is widely expected to emerge, like WiFi before it, as a technology adopted for handset devices and base stations in the near future. The eleven standardized WiFi and WiMAX operating bands are listed in Table I. Consequently, the research and manufacturing of both indoor and outdoor transmission equipment and devices fulfilling the requirements of these WiFi and WiMAX standards have increased since these standards took hold in the technical and industrial community. An antenna is one of the critical components in any wireless communication system. As mentioned above, the design and development of a single antenna working over a wide band or in multiple frequency bands, called a multiband antenna, is generally not an easy task. To answer these challenges, many antennas with wideband and/or multiband performance have been published in the open literature. A popular antenna for such applications is the microstrip antenna (MSA), and several multiband MSA designs have been reported. Another important candidate, which may compete favorably with the microstrip antenna, is the coplanar waveguide (CPW).
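For reference, the operating bands spelled out above can be collected into a small lookup (Table I of the chapter lists eleven bands in total; only those quoted in the text are reproduced here), together with a helper that checks which bands a measured impedance bandwidth covers. The 4.84-6.52 GHz example corresponds to the upper band of one of the dual-band slot antennas reported later in the chapter.

```python
# Operating bands quoted in the text (GHz); Table I of the chapter lists
# eleven bands in total, only the ones spelled out above appear here.
BANDS_GHZ = {
    "WiFi 2.4":  (2.400, 2.480),
    "WiFi 5.2":  (5.150, 5.350),
    "WiFi 5.8":  (5.725, 5.825),
    "WiMAX 2.3": (2.300, 2.400),
    "WiMAX 2.5": (2.495, 2.690),
    "WiMAX 3.5": (3.500, 3.600),
}

def covered_bands(f_low, f_high, bands=BANDS_GHZ):
    """Return the bands fully contained in an antenna's measured
    10-dB return-loss bandwidth [f_low, f_high] (GHz)."""
    return [name for name, (lo, hi) in bands.items()
            if f_low <= lo and hi <= f_high]

# Example: the upper band of the dual-band slot antenna, 4.84-6.52 GHz.
print(covered_bands(4.84, 6.52))   # ['WiFi 5.2', 'WiFi 5.8']
```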
Antennas using CPW-fed line also have many attractive features including lowradiation loss, less dispersion, easy integration for monolithic microwave circuits (MMICs) and a simple configuration with single metallic layer, since no backside processing is required for integration of devices. Therefore, the designs of CPW-fed antennas have recently become more and more attractive. One of the main issues with CPW-fed antennas is to provide an easy impedance matching to the CPW-fed line. In order to obtain multiband and broadband operations, several techniques have been reported in the literatures based on CPW-fed slot antennas (Chaimool et al., 2004(Chaimool et al., , 2005(Chaimool et al., , 2008Sari-Kha et al., 2006;Jirasakulporn, www.intechopen.com Advanced Transmission Techniques in WiMAX 20 2008), CPW-fed printed monopole (Chaimool et al., 2009;Moekham et al., 2011) and fractal techniques (Mahatthanajatuphat et al., 2009;Honghara et al., 2011). In this chapter, a variety of advanced CPW-fed antenna designs suitable for WiFi and WiMAX operations is presented. Some promising CPW-fed slot antennas and CPW-fed monopole antenna to achieve bidirectional and/or omnidirectional with multiband operation are first shown. These antennas are suitable for practical portable devices. Then, in order to obtain the unidirectional radiation for base station antennas, CPW-fed slot antennas with modified shape reflectors have been proposed. By shaping the reflector, noticeable enhancements in both bandwidth and radiation pattern, which provides unidirectional radiation, can be achieved while maintaining the simple structure. This chapter is organized as follows. Section 2 provides the coplanar waveguide structure and characteristics. In section 3, the CPW-fed slot antennas with wideband operations are presented. The possibility of covering the standardized WiFi and WiMAX by using multiband CPW-fed slot antennas is explored in section 4. In order to obtain unidirectional radiation patterns, CPWfed slot antennas with modified reflectors and metasurface are designed and discussed in section 5. Finally, section 6 provides the concluding remarks. System Designed Coplanar waveguide structure A coplanar waveguide (CPW) is a one type of strip transmission line defined as a planar transmission structure for transmitting microwave signals. It comprises of at least one flat conductive strip of small thickness, and conductive ground plates. A CPW structure consists of a median metallic strip of deposited on the surface of a dielectric substrate slab with two narrow slits ground electrodes running adjacent and parallel to the strip on the same surface Beside the microstrip line, the CPW is the most frequent use as planar transmission line in RF/microwave integrated circuits. It can be regarded as two coupled slot lines. Therefore, similar properties of a slot line may be expected. The CPW consists of three conductors with the exterior ones used as ground plates. These need not necessarily have same potential. As known from transmission line theory of a three-wire system, even and odd mode solutions exist as illustrated in Fig. 2. The desired even mode, also termed coplanar mode [ Fig. 2 (a)] has ground electrodes at both sides of the centered strip, whereas the parasitic odd mode [ Fig. 2 (b)], also termed slot line mode, has opposite electrode potentials. When the substrate is also metallized on its bottom side, an additional parasitic parallel plate mode with zero cutoff frequency can exist [Fig. 2(c)]. 
When a coplanar wave impinges on an asymmetric discontinuity such as a bend, parasitic slot line mode can be exited. To avoid these modes, bond wires or air bridges are connected to the ground places to force equal potential. Fig. 3 shows the electromagnetic field distribution of the even mode at low frequencies, which is TEM-like. At higher frequencies, the fundamental mode evolves itself approximately as a TE mode (H mode) with elliptical polarization of the magnetic field in the slots. Wideband CPW-fed slot antennas To realize and cover WiFi and WiMAX operation bands, there are three ways to design antennas including (i) using broadband/wideband or ultrawideband techniques, (ii) using multiband techniques, and (iii) combining wideband and multiband techniques. For wideband operation, planar slot antennas are more promising because of their simple structure, easy to fabricate and wide impedance bandwidth characteristics. In general, the wideband CPW-fed slot antennas can be developed by tuning their impedance values. Several impedance tuning techniques are studied in literatures by varying the slot geometries and/or tuning stubs as shown in Fig. 4 and Fig. 5. Various slot geometries have been carried out such as wide rectangular slot, circular slot, elliptical slot, bow-tie slot, and hexagonal slot. Moreover, the impedance tuning can be done by using coupling mechanisms, namely inductive and capacitive couplings as shown Fig. 5. For capacitively coupled slots, several tuning stubs have been used such as circular, triangular, rectangular, and fractal shapes. In this section, we present the wideband slot antennas using CPW feed line. There are three antennas for wideband operations: CPW-fed square slot antenna using loading metallic strips and a widened tuning stub, CPW-fed equilateral hexagonal slot antennas, and CPW-fed slot antennas with fractal stubs. The geometry and prototype of the proposed CPW-fed slot antenna with loading metallic strips and widen tuning stub is shown in Fig. 6(a) and Fig. 6(b), respectively. The proposed antenna is fabricated on an inexpensive FR4 substrate with thickness (h) of 1.6 mm and relatively permittivity ( r ) of 4.4. The printed square radiating slot has a side length of L out and a width of G. A 50- CPW has a signal strip of width W f , and a gap of spacing g between the signal strip and the coplanar ground plane. The widened tuning stub with a length of L and a width of W is connected to the end of the CPW feed line. Two loading metallic strips of the same dimensions (length of L 1 and width of 2 mm) are designed to protrude from the top comers into the slot center. The spacing between the tuning stub and edge of the ground plane is S. In this design, the dimensions are chosen to be G =72 mm, and L out = 44 mm. Two parameters of the tuning stub including L and W and the length of loading metallic strip (L 1 ) will affect the broadband operation. The parametric study was presented from our previous work (Chaimool, et. al., 2004(Chaimool, et. al., , 2005. The present design is to make the first CPW-fed slot antenna to form a wider operating bandwidth. Firstly, a CPW-fed line is designed with the strip width W f of 6.37 mm and a gap width g of 0.5 mm, corresponding to the characteristic impedance of 50-. The design structure has been obtained with the optimal tuning stub length of L =22.5 mm, tuning stub width W = 36 mm, and length of loading metallic strips L 1 = 16 mm to perform the broadband operation. 
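As a first-order sanity check of the 50-Ω feed design quoted above, the conformal-mapping formula for a conventional CPW on an infinitely thick substrate can be evaluated directly; this idealisation ignores the finite 1.6 mm FR4 height and the conductor thickness, so the value it returns is only indicative.

```python
import numpy as np
from scipy.special import ellipk

def cpw_z0(strip_w_mm, gap_mm, eps_r):
    """Quasi-static impedance of a conventional CPW on an infinitely thick
    substrate (no backside metal):
        Z0 = 30*pi/sqrt(eps_eff) * K(k')/K(k),  k = W/(W + 2g),
    with eps_eff = (eps_r + 1)/2.  Finite substrate and conductor thickness
    are ignored, so this is only a first-order check."""
    k = strip_w_mm / (strip_w_mm + 2.0 * gap_mm)
    kp = np.sqrt(1.0 - k ** 2)
    eps_eff = (eps_r + 1.0) / 2.0
    return 30.0 * np.pi / np.sqrt(eps_eff) * ellipk(kp ** 2) / ellipk(k ** 2)

# Dimensions quoted for the feed line: W_f = 6.37 mm, g = 0.5 mm, eps_r = 4.4.
print(round(cpw_z0(6.37, 0.5, 4.4), 1))
# About 45 ohms with this idealised model; the reported 50-ohm design relies
# on the finite 1.6 mm substrate height, which lowers eps_eff and raises Z0
# relative to this estimate.
```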
The proposed antenna has been constructed ( Fig. 6(b)) and then tested using a calibrated vector network analyzer. Measured result of return losses compared with the simulation is shown in Fig. 7. The far-field radiation patterns of the proposed antenna with the largest operating bandwidth using the design parameters of L 1 =16 mm, W = 36 mm, L =22.5 mm, and S = 0.5 mm have been then measured. Fig. 8 shows the plots of the radiation patterns measured in y-z and x-z planes at the frequencies of 1660 and 2800 MHz. It has been found that we can obtain acceptable broadside radiation patterns. This section introduces a new CPW-fed square slot antenna with loading metallic strips and a widened tuning stub for broadband operation. The simulation and experimental results of the proposed antenna show the impedance bandwidth, determined by 10-dB return loss, larger than 67% of the center frequency. The proposed antenna can be applied for WiFi (2.4 GHz) and WiMAX (2.3 and 2.5 GHz bands) operations. The optimal dimensions have been used for building up the proposed antenna. Measured return loss using a vector network analyzer is now shown in Fig.10. As we can see that the measured return loss agrees well with simulation expectation. It is also seen that the proposed antenna has an operational frequency range from 1.657 to 2.956 GHz or bandwidth about 55% of the center frequency measured at higher 10 dB return loss. This section presents design and implementation of the CPW-fed equilateral hexagonal slot antenna. The transmission line and ground-plane have been designed to be on the same plane with the antenna slot to be applicable for wideband operation. It is found that the proposed antenna is accessible to bandwidth about 55.39%, a very large bandwidth comparing with conventional microstrip antennas, which mostly provide 1-5 % bandwidth. The proposed antenna can be used for many wireless systems such as WiFi , WiMAX, GSM1800, GSM1900, and IMT-2000. CPW-fed slot antennas with fractal stubs In this section, the CPW-fed slot antenna with tuning stub of fractal geometry will be investigated. The Minkowski fractal structure will be modified to create the fractal stub of the proposed antenna. The proposed antennas have been designed and fabricated on an inexpensive FR4 substrate of thickness h = 0.8 mm and relative permittivity  r = 4.2. The first antenna consists of a rectangular stub or zero iteration of fractal model (0 iteration), which has dimension of 10 mm × 25 mm. It is fed by 50 CPW-fed line with the strip width and distance gap of 7.2 mm and 0.48 mm, respectively. In the process of studying the fractal geometry on stub, it is begun by using a fractal model to repeat on a rectangular patch stub for creating the first and second iterations of fractal geometry on the stub, as shown in Fig. 11. Then, the fractal stub is connected by 50 CPW-fed line. On the second iteration fractal stub of the antenna, the fraction of size between the center element and four around elements is 1.35 because this value is suitable for completely fitting to connect between the center element and four around elements. As shown in Fig. 12(a), the dimensions of the second iteration antenna are following: W T = 48 mm, L T = 50 mm, W S1 = 39.84 mm, L S1 = 20.6 mm, W S2 = 15.84 mm, L S2 = 19.28 mm, W S3 = 7.42 mm, L S3 = 7.72 mm, W A = 25 mm, L B = 10 mm, W TR = 7.2 mm, and h = 0.8 mm. 
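The fractional bandwidths quoted throughout the chapter follow directly from the band edges; a one-line helper makes the arithmetic explicit. For the 1.657-2.956 GHz band above it gives about 56 %, close to the ~55 % figure quoted in the text, the small difference presumably coming from rounding or the exact centre-frequency definition.

```python
def fractional_bandwidth_percent(f_low_ghz, f_high_ghz):
    """10-dB return-loss bandwidth expressed as a percentage of the
    centre frequency, the figure of merit quoted throughout the chapter."""
    f_center = 0.5 * (f_low_ghz + f_high_ghz)
    return (f_high_ghz - f_low_ghz) / f_center * 100.0

# Measured band of the hexagonal slot antenna quoted above: 1.657-2.956 GHz.
print(round(fractional_bandwidth_percent(1.657, 2.956), 1))   # about 56 %
```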
In order to study the effects of fractal geometry on the stub of the slot antenna, IE3D program is used to simulate the characteristics and frequency responses of the antennas. The simulated return loss results of the 1 st and 2 nd iterations are shown in Fig. 13 and expanded in frequencies decrease as increasing the iteration for fractal stub. Typically, the increasing iteration in the conventional fractal structure affects to the widely bandwidth. However, these results have inverted because the electrical length on the edge of stub, which the stub in the general CPW-fed slot antenna was used to control the higher frequency band, is increased and produced by the fractal geometry. In Table 3, simulation results show the antenna gains at operating frequency of 1.8 GHz, 2.1 GHz, 2.45 GHz, and 3.5 GHz above 3dBi. As the higher operating frequency, the average antenna gains are about 2 dBi. The overall dimension of CPW-fed fabricated slot antennas with fractal stub is 48× 50 × 0.8 mm 3 , as illustrated in Fig. 12(b). The simulated and measured results of the proposed antennas are compared as shown in Fig. 13. It can be clearly found that the simulated and measured results are similarity. However, the measured results of the return loss bandwidth slightly shift to higher frequency band. The error results are occurred due to the problem in fabrication because the fractal geometry stubs need the accuracy shapes. Moreover, the radiation patterns of 0, 1 st and 2 nd iteration stubs of the antennas are similar, which are the bidirectional radiation patterns at two frequencies, 2.45 and 3.5 GHz, as depicted in Fig. 14 This section studies CPW-fed slot antennas with fractal stubs. The return loss bandwidth of the antenna is affected by the fractal stub. It has been found that the antenna bandwidth decreases when the iteration of fractal stub increases, which it will be opposite to the conventional fractal structures. In this study, fractal models with the 0, 1 st and 2 nd iterations have been employed, resulting in the return loss bandwidths to be 121%, 115%, and 82%, respectively. Moreover, the radiation patterns of the presented antenna are still bidirections and the average gains of antenna are above 2 dBi for all of fractal stub iterations. Results indicate an impedance bandwidth covering the band for WiFi, WiMAX, and IMT-2000. Multiband CPW-fed slot antennas Design of antennas operating in multiband allows the wireless devices to be used with only a single antenna for multiple wireless applications, and thus permits to reduce the size of the space required for antenna on the wireless equipment. In this section, we explore the possibility of covering some the standardized WiFi and WiMAX frequency bands while cling to the class of simply-structured and compact antennas. Dual-band CPW-fed slot antennas using loading metallic strips and a widened tuning stub In this section, we will show that CPW-fed slot antennas presented in the previous section (Section 3.1) can also be designed to demonstrate a dual-band behavior. The first dual-band antenna topology that, we introduce in Fig. 15(a); consists of the inner rectangular slot antenna with dimensions of w in ×L in and the outer square slot (L out ×L out ). The outer square slot is used to control the first or lower operating band. On the other hand, the inner slot of width is used to control the second or upper operating band. The second antenna as shown in Fig. 
15(b) combines a tuning stub with dimensions of W s ×L 3 placed in the inner slot at its bottom edge. The tuning stub is used to control coupling between a CPW feed line and the inner rectangular slot. In the third antenna as shown in Fig. 15(c), another pair of loading metallic strips is added at the bottom inner slot corners with dimensions of 1 mm×L 2 . Referring to Fig. 15(a), if adding a rectangular slot at tuning stub with w in = 21 mm and L in = 11 mm to the wideband antenna ( Fig. 6(a)), an additional resonant mode at about 5.2 GHz is obtained. This resonant mode excited is primarily owing to an inner rectangular slot. This way the antenna becomes a dual-band one in which the separation between the two resonant frequencies is a function of the resonant length of the second resonant frequency, the length and width of the inner slot (L in and w in ). To achieve the desired dual band operation of the rest antennas, we can adjust the parameters, (W, L, L 1 ) and (w in , W s , L 2 , L 3 , L in ), of the outer and inner slots, respectively, to control the lower and upper operating bands of the proposed antennas. The measured return losses of the proposed antennas are shown in Fig. 16. It can be observed that the multiband characteristics can be obtained. The impedance bandwidths of the lower band for all antennas are slightly different, and on the other hand, the upper band has an impedance bandwidth of 1680 MHz (4840-6520 MHz) for antenna in Fig. 15(b), which covers the WiFi band at 5.2 GHz and 5.8 GHz band for WiMAX. To sum up, the measured results and the corresponding settings of the parameters are listed Table 4. Radiation patterns of the proposed antennas were measured at two resonant frequencies. Fig. 17(a) and (b) show the y-z and x-z plane co-and cross-polarized patterns at 1700 and 5200 MHz, respectively. The radiation patterns are bidirectional on the broadside due to the outer slot mode at lower frequency and the radiation patterns are irregular because of the excitation of higher order mode, the traveling wave. By inserting a slot and metallic strips at the widened stub in a single layer and fed by coplanar waveguide (CPW) transmission line, novel dual-band and broadband operations are presented. The proposed antennas are designed to have dual-band operation suitable for applications WiFi (2.4 and 5 GHz bands) and WiMAX (2.3, 2.5 and 5.8 bands) bands. The dual-band antennas are simple in design, and the two operating modes of the proposed antennas are associated with perimeter of slots and loading metallic strips, in which the lower operating band can be controlled by varying the perimeters of the outer square slot and the higher band depend on the inner slot of the widened stub. The experimental results of the proposed antennas show the impedance bandwidths of the two operating bands, determined from 10-dB return loss, larger than 61% and 27% of the center frequencies, respectively. Based on the antenna parameters and the ground plane size depicted in Fig. 18, a prototype of this antenna was designed, fabricated and tested as shown in Fig. 19. Fig. 20 shows the measured return loss for the tri-band antenna. It is clearly seen that four resonant modes are excited at the frequencies of 2.59, 3.52, 5.56 and 6.37 GHz that results in three distinct bands. It is worthy of note that the latter two resonant modes are deliberately made in merge as a single wideband in order to cover all the unlicensed bands from 5.15 GHz to 5.85 GHz. 
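To make the role of the iteration factor concrete, the sketch below applies a generic Minkowski-type generator to a closed polyline: each segment is split into thirds and the middle third is pushed sideways by γ times one third of the segment length. This is only an illustration of the idea; the modified generator of Fig. 24 (parameterised by W_p and W_s) may differ in detail, and the 10 mm square used here is an arbitrary starting shape.

```python
import numpy as np

def minkowski_iteration(points, gamma=0.66):
    """Apply one generic Minkowski-type generator to a closed polyline:
    split every segment into thirds and displace the middle third by
    gamma * (segment length / 3).  Sketch only; the chapter's modified
    generator may differ in detail."""
    new_pts = []
    n = len(points)
    for i in range(n):
        p = np.asarray(points[i], float)
        q = np.asarray(points[(i + 1) % n], float)
        seg = q - p
        third = seg / 3.0
        # unit normal of the segment (displacement direction is arbitrary here)
        normal = np.array([-seg[1], seg[0]]) / np.linalg.norm(seg)
        bump = gamma * np.linalg.norm(third) * normal
        a, b = p + third, p + 2.0 * third
        new_pts.extend([p, a, a + bump, b + bump, b])
    return new_pts

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
first = minkowski_iteration(square)
second = minkowski_iteration(first)
print(len(square), len(first), len(second))   # 4, 20, 100 vertices
# Each iteration multiplies the perimeter by (3 + 2*gamma)/3, which is how
# the fractal boundary packs extra electrical length into a fixed footprint.
```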
The obtained 10-dB impedance bandwidths are 600 MHz (2.27-2.87 GHz), 750 MHz (3.4-4.15 GHz) and 1590 MHz (5.11-6.7 GHz), corresponding to the 23%, 20%, and 27%, respectively. CPW-fed mirrored-L monopole antenna with distinct triple bands Obviously, the achieved bandwidths not just cover the WiFi bands of 2.4 GHz (2.4-2.484 GHz) and 5.2 GHz (5.15-5.25 GHz), but also the licensed WiMAX bands of 2.5 GHz (2.5-2.69 GHz) and 3.5 GHz (3.4 -3.69 GHz). Fig. 20 shows the measured gains compared to the simulated result for all distinct bands. For the first two bands, gains are slightly decreased with frequency increases, whereas the gains in the upper band are fallen in with the simulation. The radiation characteristics have also been investigated and the measured patterns in two cuts (x-y plane, x-z plane) at 2.59, 3.52, and 5.98 GHz are plotted in Figs. 21(a), 21(b) and 21(c), respectively. As expected, the very good omni-directional patterns are obtained for all frequency bands in the x-y plane, whilst the close to bi-directional patterns in the x-z plane are observed. Multiband antenna with modified fractal slot fed by CPW In this section, a fractal slot antenna fed by CPW was created by applying the Minkowski fractal concept to generate the initial generator model at both sides of inner patch of the antenna, as shown in Fig. 23. The altitude of initial generator model as shown in Fig. 24 varies with W p . Usually, W p is smaller than W s /3 and the iteration factor is  = 3Wp/Ws; 0 <  < 1. Normally, the appropriated value of iteration factor  = 0.66 was used to produce the fractal slot antenna. The configuration of the proposed antenna, as illustrated in Fig. 23, is the modified fractal slot antenna fed by CPW. The antenna composes of the modified inner metallic patch, which is fed by a 50-CPW line with a strip width W f and gap g 1 , and an outer metallic patch. In the section, the antenna is fabricated on an economical FR4 dielectric substrate with a thickness of 1.6 mm ( The prototype of the proposed antenna is shown in Fig. 23(b). The simulated and measured return losses of the antenna are illustrated in Fig. 25. It is clearly observed that the measured return loss of the antenna slightly shifts to the right because of the inaccuracy of the manufacturing process by etching into chemicals. However, the measured result of proposed antenna still covers the operating bands of 1.71-1.88 GHz and 3.2-5.5 GHz for the applications of DCS 1800, WiMAX (3.3 and 3.5 bands), and WiFi (5.5 GHz band). This section presents a multiband slot antenna with modifying fractal geometry fed by CPW transmission line. The presented antenna has been designed by modifying an inner fractal patch of the antenna to operate at multiple resonant frequencies, which effectively supports the digital communication system (DCS1800 1.71-1.88 GHz), WiMAX (3.30-3.80 GHz), and WiFi (5.15-5.35 GHz). Manifestly, it has been found that the radiation patterns of the presented antenna are still similarly to the bidirectional radiation pattern at all operating frequencies. Unidirectional CPW-fed slot antennas From the previous sections, most of the proposed antennas have bidirectional radiation patterns, with the back radiation being undesired directions but also increases the sensitivity of the antenna to its surrounding environment and prohibits the placement of such slot antennas on the platforms. 
A CPW-fed slot antenna naturally radiates bidirectionally, this characteristic is necessary for some applications, such as antennas for roads. However, this inherent bidirectional radiation is undesired in some wireless communication applications such as in base station antenna. There are several methods in order to reduce backside radiation and increase the gain. Two common approaches are to add an additional metal reflector and an enclosed cavity underneath the slot to redirect radiated energy from an undesired direction. In this section, promising wideband CPW-fed slot antennas with unidirectional radiation pattern developed for WiFi and WiMAX applications are presented. We propose two techniques for redirect the back radiation forward including (i) using modified the reflectors placed underneath the slot antennas ( Fig. 26(a)) and (ii) the new technique by using the metasurface as a superstrate as shown in Fig 26(b). Fig. 26. Arrangement of unidirectional CPW-fed slot antennas (a) conventional structure using conductor-back reflector and (b) the proposed structure using metasurface superstrate Wideband unidirectional CPW-fed slot antenna using loading metallic strips and a widened tuning stub The geometry of a CPW-fed slot antennas using loading metallic strips and a widened tuning stub is depicted in Fig. 27(a). Three different geometries of the proposed conducting reflector behind CPW-fed slot antennas using loading metallic strips and a widened tuning stub are shown in Figs. 27(b), (c), and (d). It comprises of a single FR4 layer suspended over a metallic reflector, which allows to use a single substrate and to minimize wiring and soldering. The antenna is designed on a FR4 substrate 1.6 mm thick, with relative dielectric constant ( r ) 4.4. This structure without a reflector radiates a bidirectional pattern and maximum gain is about 4.5 dBi. The first antenna, Fig. 27(b), is the antenna located above a flat reflector, with a reflector size 100×100 mm 2 . The -shaped reflector with the horizontal www.intechopen.com plate is a useful modification of the corner reflector. To reduce overall dimensions of a large corner reflector, the vertex can be cut off and replaced with the horizontal flat reflector (Wc1×Wc3). The geometry of the proposed wideband CPW-fed slot antenna using -shaped reflector with the horizontal plate is shown in Fig. 27(c). The -shaped reflector, having a horizontal flat section dimension of W c1 ×W c3 , is bent with a bent angle of . The width of the bent section of the -shaped reflector is W c2 . The distance between the antenna and the flat section is h c . For the last reflector, we modified the conductor reflector shape. Instead of the -shaped reflector, we took the conductor reflector to have the form of an inverted shaped reflector. The geometry of the inverted -shaped reflector with the horizontal plate is shown in Fig. 27(d). The inverted -shaped reflector, having a horizontal flat section dimension of W d1 ×W d3 , is bent with a bent angle of . The width of the bent section of the inverted -shaped reflector is W d2 . The distance between the antenna and the flat section is h d . Several parameters have been reported in (Akkaraekthalin et al., 2007). Fig. 28. Fig. 29 shows the measured return losses of the proposed antenna. The 10-dB bandwidth is about 69% (1.5 to 3.1 GHz) of 72DegAnt. A very wide impedance bandwidth of 73% (1.5 -3.25 GHz) for the antenna of 90DegAnt was achieved. 
The last, impedance bandwidth is 49% (1.88 to 3.12 GHz) when the antenna is 120DegAnt as shown in Fig. 29. However, from the obtained results of the three antennas, it is clearly seen that the broadband bandwidth for PCS/DCS/IMT-2000 WiFi and WiMAX bands is obtained. The radiation characteristics are also investigated. Fig. 30 presents the measured far-field radiation patterns of the proposed antennas at 1800 MHz, 2400 MHz, and 2800 MHz. As expected, the reflectors allow the antennas to radiate unidirectionally, the antennas keep the similar radiation patterns at several separated selected frequencies. The radiation patterns are stable across the matched frequency band. The main beams of normalized H-plane patterns at 1.8, 2.4, and 2.8 GHz are also measured for three different reflector shapes as shown in Fig. 31. Finally, the measured antenna gains in the broadside direction is presented in Fig. 32. For the 72DegAnt, the measured antenna gain is about 7.0 dBi over the entire viable frequency band. As shown, the gain variations are smooth. The average gains of the 90DegAnt and 120DegAnt over this bandwidth are 6 dBi and 5 dBi, respectively. This is due to impedance mismatch and pattern degradation, as the back radiation level increases rapidly at these frequencies. The radiator is center-fed inductively coupled slot, where the slot has a length (L-W f ) and width W. A 50- CPW transmission line, having a signal strip of width W f and a gap of distance g, is used to excite the slot. The slot length determines the resonant length, while the slot width can be adjusted to achieve a wider bandwidth. The antenna is printed on 1.6 mm thick (h 1 ) FR4 material with a dielectric constant ( r1 ) of 4.2. For the metasurface as shown in Fig. 33(b), it comprises of an array 4×4 square loop resonators (SLRs). It is printed on an inexpensive FR4 substrate with dielectric constant  r2 = 4.2 and thickness (h 2 ) 0.8 mm. The physical parameters of the SLR are given as follows: P = 20 mm, a = 19 mm and b= 18 mm. To validate the proposed concept, a prototype of the CPW-fed slot antenna with metasurface was designed, fabricated and measured as shown in Fig. 34 (a). The metasurface is supported by four plastic posts above the CPW-fed slot antenna with h a = 6.0 mm, having dimensions of 108 mm´108 mm (0.86 0 ´0.86 0 ). Simulations were conducted by using IE3D simulator, a full-wave moment-ofmethod (MoM) solver, and its characteristics were measured by a vector network analyzer. The S 11 obtained from simulation and measurement of the CPW-fed slot antenna with metasurface with a very good agreement is shown in Fig. 34 (b). The measured impedance bandwidth (S 11 ≤ -10 dB) is from 2350 to 2600 MHz (250 MHz or 10%). The obtained bandwidth covers the required bandwidth of the WiFi and WiMAX systems (2300-2500 MHz). Some errors in the resonant frequency occurred due to tolerance in FR4 substrate and poor manufacturing in the laboratory. Corresponding radiation patterns and realized gains of the proposed antenna were measured in the anechoic antenna chamber located at the Rajamangala University of Technology Thanyaburi (RMUTT), Thailand. The measured radiation patterns at 2400, 2450 and 2500 MHz with both co-and cross-polarization in E-and H-planes are given in Fig. 35 and 36, respectively. Very good broadside patterns are observed and the cross-polarization in the principal planes is seen to be than -20 dB for all of the operating frequency. The front-to-back ratios FBRs were also measured. 
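The quoted 0.86 λ0 footprint of the metasurface is easy to verify from the 108 mm edge length and the lower edge of the operating band; the small helper below assumes 2.4 GHz as the reference frequency.

```python
C0_MM_PER_S = 299_792_458.0 * 1e3   # speed of light in mm/s

def electrical_size(length_mm, freq_hz):
    """Physical length expressed in free-space wavelengths at freq_hz."""
    return length_mm * freq_hz / C0_MM_PER_S

# 108 mm metasurface edge evaluated at 2.4 GHz.
print(round(electrical_size(108.0, 2.4e9), 2))
# ~0.86 wavelengths, matching the 0.86*lambda_0 footprint quoted for the
# 4x4 square-loop-resonator superstrate.
```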
From the measured results, the FBRs are more than 15 dB and 10 dB in the E- and H-planes, respectively. Moreover, the realized gains of the CPW-fed slot antenna with and without the metasurface were measured and compared, as shown in Fig. 37. Without the metasurface the gain is about 1.5 dBi, whereas with the metasurface it increases to 8.0 dBi at the center frequency, i.e., a gain improvement of 6.5 dB. The realized gain with the metasurface is improved over the entire operating bandwidth.
Conclusions In this chapter, we have introduced wideband CPW-fed slot antennas, multiband CPW-fed slot and monopole antennas, and unidirectional CPW-fed slot antennas. For multiband operation, CPW-fed multi-slots and multiple monopoles were presented; in addition, a CPW-fed slot antenna with a fractal tuning stub was also presented for multiband operation. Some WiFi or WiMAX applications, such as point-to-point communications, require unidirectional antennas. Therefore, we also presented CPW-fed slot antennas with unidirectional radiation patterns obtained by using a modified reflector or a metasurface superstrate.
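Several of the figures of merit quoted in this section (fractional impedance bandwidths and the metasurface gain improvement) follow from simple arithmetic on the reported band edges. The short Python sketch below reproduces those numbers purely for illustration; the band edges and gain values are the measured results cited above, not new data.

```python
# Illustrative sketch: reproduce the fractional-bandwidth and gain-improvement
# arithmetic quoted in the text (band edges taken from the reported measurements).

def fractional_bandwidth(f_low_ghz: float, f_high_ghz: float) -> float:
    """10-dB return-loss bandwidth as a percentage of the center frequency."""
    f_center = 0.5 * (f_low_ghz + f_high_ghz)
    return 100.0 * (f_high_ghz - f_low_ghz) / f_center

# Reflector-backed antennas (band edges reported with Fig. 29).
for name, (fl, fh) in {"72DegAnt": (1.5, 3.1),
                       "90DegAnt": (1.5, 3.25),
                       "120DegAnt": (1.88, 3.12)}.items():
    print(f"{name}: {fractional_bandwidth(fl, fh):.0f}% ({fl}-{fh} GHz)")

# Metasurface-backed antenna (values reported with Fig. 34(b) and Fig. 37).
print(f"metasurface antenna: {fractional_bandwidth(2.35, 2.60):.0f}%")
print(f"gain improvement: {8.0 - 1.5:.1f} dB")  # 8.0 dBi with vs 1.5 dBi without
```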
Expectation Propagation for Poisson Data The Poisson distribution arises naturally when dealing with data involving counts, and it has found many applications in inverse problems and imaging. In this work, we develop an approximate Bayesian inference technique based on expectation propagation for approximating the posterior distribution formed from the Poisson likelihood function and a Laplace type prior distribution, e.g., the anisotropic total variation prior. The approach iteratively yields a Gaussian approximation, and at each iteration, it updates the Gaussian approximation to one factor of the posterior distribution by moment matching. We derive explicit update formulas in terms of one-dimensional integrals, and also discuss stable and efficient quadrature rules for evaluating these integrals. The method is showcased on two-dimensional PET images. Introduction The Poisson distribution is widely employed to describe inverse and imaging problems involving count data, e.g., emission computed tomography [43,39], including positron emission tomography and single photon emission computed tomography. The corresponding likelihood function is a Poisson distribution with its parameter given by an affine transform (followed by a suitable link function). Over the past few decades, the mathematical theory and numerical algorithms for image reconstruction with Poisson data have witnessed impressive progresses. We refer interested readers to [21] for a comprehensive overview on variational regularization techniques for Poisson data and [3] for mathematical modeling and numerical methods for Poisson data. To cope with the inherent ill-posed nature of the imaging problem, regularization plays an important role. This can be achieved implicitly via early stopping during an iterative reconstruction procedure (e.g., EM algorithm or Richardson-Lucy iterations) or explicitly via suitable penalties, e.g., Sobolev penalty, sparsity and total variation. The penalized maximum likelihood (or equivalently maximum a posteriori (MAP)) is currently the most popular way for image reconstruction with Poisson models [11,40]. However, these approaches can only provide point estimates, and the important issue of uncertainty quantification, which provides crucial reliability assessment on point estimates, is not fully addressed. The Bayesian approach provides a principled yet very flexible framework for uncertainty quantification of inverse and imaging problems [26,41]. The prior distribution acts as a regularizer, and the ill-posedness of the imaging problem is naturally dealt with. Due to the imprecise prior knowledge of the solution and the presence of the data noise, the posterior distribution contains an ensemble of inverse solutions consistent with the observed data, which can be used to quantify the uncertainties associated with a point estimator, via, e.g., credible interval or highest probability density regions. For imaging problems with Poisson data, a full Bayesian treatment is challenging, due to the nonnegativity constraint and high-dimensionality. There are several possible strategies. One idea is to use general-purposed sampling methods to explore the posterior state space, predominantly Markov chain Monte Carlo (MCMC) methods [31,36]. Then the constraints on the signal can be incorporated directly by discarding samples violating the constraint. 
However, in order to obtain accurate statistical estimates, sampling methods generally require many samples and thus tend to suffer from high computational cost, due to the high problem dimensionality. Further, the MCMC convergence is challenging to diagnose. These observations have motivated intensive research works on developing approximate inference techniques (AITs). In the machine learning literature, a large number of AITs have been proposed, e.g., variational inference [25,5,8,23,2], expectation propagation [33,32] and more recently Bayesian (deep) neural network [16]. In all AITs, one aims at finding an optimal approximate yet tractable distribution within a family of parametric/nonparametric probability distributions (e.g., Gaussian), by minimizing the error in a certain probability metric, prominently the Kullback-Leibler divergence. Empirically they can often produce reasonable approximations but at a much reduced computational cost than MCMC. However, there seem no systematic strategies for handling constraints in these approaches. For example, a straightforward truncation of the distribution due to the constraint often leads to elaborated distributions, e.g., truncated normal distribution, which tends to make the computation tedious or even completely intractable in variational Bayesian inference. In this work, we develop a computational strategy for exploring the posterior distribution for Poisson data (with two popular nonnegativity constraints) with a Laplace type prior based on expectation propagation [33,32], in order to deliver a Gaussian approximation. Laplace prior promotes the sparsity of the image in a transformed domain, which is a valid assumption on most natural images. The main contributions of the work are as follows. First, we derive explicit update formulas in terms of one-dimensional integrals. It essentially exploits the rank-one projection form of the factors to reduce the intractable high-dimensional integrals to tractable one-dimensional ones. In this way, we arrive at two approximate inference algorithms, parameterized by either moment or natural parameters. Second, we derive stable and efficient quadrature rules for evaluating the resulting one-dimensional integrals, i.e., a recursive scheme for Poisson sites with large counts and an approximate expansion for Laplace sites, and discuss different schemes for the recursion, dependent of the integration interval, in order to achieve good numerical stability. Last, we illustrate the approach with comprehensive numerical experiments with the posterior distribution formed by Poisson likelihood and an anisotropic total variation prior, clearly showcasing the feasibility of the approach. Last, we put the work in the context of Bayesian analysis of Poisson data. The predominant body of literature in statistics employs a log link function, commonly known as Poisson regression in statistics and machine learning (see, e.g., [7,2]). This differs substantially from the one frequently arising in medical imaging, e.g., positron emission tomography, and in particular the crucial nonnegativity constraint becomes vacuous. The only directly relevant work we are aware of is the recent work [27]. The work [27] discussed a full Bayesian exploration with EP, by modifying the posterior distributions using a rectified linear function on the transformed domain of the signal, which induces singular measures on the region violating the constraint. However, the work [27] does not consider the background. 
The rest of the paper is organized as follows. In Section 2 we describe the posterior distribution for the Poisson likelihood function and a Laplace type prior. Then we give explicit expressions of the integrals involved in the EP update and describe two algorithms in Section 3. In Section 4 we present stable and efficient numerical methods for evaluating the one-dimensional integrals. Last, in Section 5 we present numerical results for two-dimensional inverse problems. In the appendices, we describe two useful parameterizations of a Gaussian distribution, the Laplace approximation, and additional comparative numerical results for a one-dimensional problem with MCMC and the Laplace approximation.
Problem formulation In this part, we give the Bayesian formulation for Poisson data, i.e., the likelihood function p(y|x) and the prior distribution p(x), and discuss the nonnegativity constraint. Let x ∈ R^n be the (unknown) signal/image of interest, y ∈ R^{m1}_+ be the observed Poisson data, and A = (a_1, ..., a_{m1})^t ∈ R^{m1×n} be the forward map, where the superscript t denotes matrix/vector transpose. The entries of the matrix A are assumed to be nonnegative. For example, in emission computed tomography, it can be a discrete analogue of the Radon transform, or, probabilistically, the entry a_ij of the matrix A denotes the probability that the ith sensor pair records the photon emitted at the jth site. The conditional probability density p(y_i|x) of observing y_i ∈ N given the signal x is given by p(y_i|x) = (a_i^t x + r_i)^{y_i} e^{-(a_i^t x + r_i)} / y_i!, where r_i ∈ R_+ is the background. That is, the entry y_i follows a Poisson distribution with parameter a_i^t x + r_i. The Poisson model of this form is popular in the statistical modeling of inverse and imaging problems involving counts, e.g., positron emission tomography [43]. If the entries of y are mutually independent, then the likelihood function p(y|x) is given by p(y|x) = ∏_{i=1}^{m1} p(y_i|x). Note that the likelihood function p(y|x) is not well defined for all x ∈ R^n, and suitable constraints on x are needed in order to ensure the well-definedness of the factors p(y_i|x). In the literature, there are three popular constraints: C_1 = {x ∈ R^n : x ≥ 0}, C_2 = {x ∈ R^n : a_i^t x ≥ 0, i = 1, ..., m1}, and C_3 = {x ∈ R^n : a_i^t x + r_i > 0, i = 1, ..., m1}. Since the entries of A are nonnegative, there holds C_1 ⊂ C_2 ⊂ C_3. In practice, the first assumption is most consistent with the physics in that it reflects the physical constraint that emission counts are nonnegative. The last two assumptions were proposed to reduce positive bias in the cold region [29], i.e., the region that has zero count. In this work, we shall focus on the last two constraints. The constraints C_2 and C_3 can be unified, which is useful for the discussions below. Definition 2.1. For each likelihood factor p(y_i|x) with the constraint C_2, let V_i^+ = {x ∈ R^n : a_i^t x > 0}. For each likelihood factor p(y_i|x) with the constraint C_3, let V_i^+ = {x ∈ R^n : a_i^t x + r_i > 0}. Then the constraints C_2 and C_3 are both given by V^+ = ∩_i V_i^+, and V^- = R^n \ V^+. With the indicator function 1_{V^+}(x) of the set V^+, we modify the likelihood function p(y|x) by p̃(y|x) = 1_{V^+}(x) ∏_{i=1}^{m1} p(y_i|x). This extends the domain of p(y|x) from V^+ to R^n, and it facilitates a full Bayesian treatment. Since the indicator function 1_{V^+}(x) admits a separable form, i.e., 1_{V^+}(x) = ∏_{i=1}^{m1} 1_{V_i^+}(x), each constrained likelihood factor p(y_i|x) 1_{V_i^+}(x) depends on x only through the scalar a_i^t x. To fully specify the Bayesian model, we have to stipulate the prior p(x). We focus on a Laplace type prior. Let L ∈ R^{m2×n}, and let L_i^t denote the ith row of L (with L_i ∈ R^n). Then a Laplace type prior p(x) is given by p(x) = (α/2)^{m2} exp(-α Σ_{i=1}^{m2} |L_i^t x|). The parameter α > 0 determines the strength of the prior, playing the crucial role of a regularization parameter in variational regularization [22]. The choice of the hyperparameter α in the prior p(x) is notoriously challenging [22].
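To make the formulation concrete, the following minimal Python sketch evaluates the negative logarithm of the unnormalized product of the modified likelihood and the Laplace type prior under the constraint C_3 (which, up to an additive constant, is the negative log-posterior formed in the next section). The names A, r, y, L, and alpha simply mirror the notation of this section; the function is an illustration under those assumptions, not part of the EP algorithm developed below.

```python
import numpy as np

def neg_log_posterior(x, A, r, y, L, alpha):
    """Unnormalized negative log-posterior for the Poisson likelihood with
    background r and a Laplace (anisotropic TV-type) prior on Lx.
    Returns +inf outside the constraint set C3 = {x : Ax + r > 0}."""
    s = A @ x + r
    if np.any(s <= 0):              # constraint C3 violated
        return np.inf
    # Poisson negative log-likelihood (additive constants log(y_i!) dropped)
    nll = np.sum(s - y * np.log(s))
    # Laplace-type prior on the transformed signal Lx
    nlp = alpha * np.sum(np.abs(L @ x))
    return nll + nlp
```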
One may apply hierarchical Bayesian modeling in order to estimate it from the data simultaneously with q(x) [45,24,2]. The prior p(x) is commonly known as a sparsity prior (in the transformed domain), which favors a candidate with many small elements and few large elements in the vector Lx. The canonical total variation prior is recovered when the matrix L computes the discrete gradient. It is well known that the total variation penalty preserves edges in images/signals well, and hence it has been very popular for various imaging tasks [37,9]. By Bayes' formula, we obtain the Bayesian solution to the Poisson inverse problem, i.e., the posterior probability density function p(x|y) = Z^{-1} 1_{V^+}(x) p(y|x) p(x), with Z = ∫_{R^n} 1_{V^+}(x) p(y|x) p(x) dx (2.1). The computation of Z is generally intractable for high-dimensional problems, and p(x|y) has to be approximated.
Approximate inference by expectation propagation In this section, we describe the basic concepts and algorithms of expectation propagation (EP) for exploring the posterior distribution (2.1). EP, due to Minka [33,32], is a popular variational type approximate inference method in the machine learning literature. It is especially suitable for approximating a distribution formed by a product of functions, with each factor being of projection form. Since its first appearance in 2001, EP has found many successful applications in practice, and it is reported to be very accurate, e.g., for Gaussian processes [35] and for electrical impedance tomography with a sparsity prior [18]. However, the theoretical understanding of EP remains quite limited [13,12]. EP looks for an approximate Gaussian distribution q(x) to a target distribution by means of an iterative algorithm. It relies on the following factorization of the posterior distribution p(x|y) (with m = m1 + m2 being the total number of factors): p(x|y) ∝ ∏_{i=1}^m t_i(x) (3.1), where the first m1 factors are the constrained Poisson sites p(y_i|x) 1_{V_i^+}(x) and the remaining m2 factors are the Laplace sites (α/2) e^{-α|L_i^t x|}. Note that each factor t_i(x) is a function defined on the whole space R^n. Likewise, we denote the Gaussian approximation q(x) to the posterior distribution p(x|y) by q(x) = Z̃^{-1} ∏_{i=1}^m t̃_i(x), with each factor t̃_i(x) being a Gaussian distribution N(x|µ_i, C_i), and Z̃ is the corresponding normalizing constant. Below we use two different parameterizations of a Gaussian distribution, i.e., moment parameters (mean and covariance) (µ, C) and natural parameters (h, Λ); see Appendix A. Both parameterizations have their pros and cons: the moment one does not require solving linear systems, and the natural one allows singular Gaussians for t̃_i(x). Reduction to one-dimensional integrals There are two main steps in one EP iteration: (a) forming a tilted distribution q̂_i(x), and (b) updating the Gaussian approximation q(x) by matching its moments with those of q̂_i(x). The moment matching step can be interpreted as minimizing the Kullback-Leibler divergence KL(q̂_i || q) [33,32,18]. Recall that the Kullback-Leibler divergence from one probability distribution p(x) to another q(x) is defined by [28] D_KL(p || q) = ∫ p(x) log(p(x)/q(x)) dx. By Jensen's inequality, the divergence D_KL(p||q) is always nonnegative, and it vanishes if and only if p(x) = q(x) almost everywhere. The task at step (a) is to construct the ith tilted distribution q̂_i(x). Let q_{\i}(x) be the ith cavity distribution, i.e., the product of all but the ith factor, defined by q_{\i}(x) ∝ q(x)/t̃_i(x) = ∏_{j≠i} t̃_j(x), whose moment and natural parameters are denoted by (µ_{\i}, C_{\i}) and (h_{\i}, Λ_{\i}), respectively. Then the ith tilted distribution q̂_i(x) of the approximation q(x) is given by q̂_i(x) = Ẑ_i^{-1} t_i(x) q_{\i}(x), where Ẑ_i = ∫ t_i(x) q_{\i}(x) dx is the corresponding normalizing constant. With this exclusion-inclusion step, one replaces the ith factor t̃_i(x) in the approximation q with the exact one t_i(x).
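As a small, generic companion to the Kullback-Leibler divergence recalled above, the helper below evaluates the closed-form KL divergence between two multivariate Gaussians; this can be useful as a diagnostic, e.g., for comparing successive Gaussian approximations or the EP result against the Laplace approximation of Appendix B. It is a standalone sketch and is not part of the EP update itself.

```python
import numpy as np

def kl_gaussians(mu_p, C_p, mu_q, C_q):
    """KL( N(mu_p, C_p) || N(mu_q, C_q) ) in closed form."""
    n = mu_p.size
    Lq = np.linalg.cholesky(C_q)                       # for a stable log-determinant
    tr = np.trace(np.linalg.solve(C_q, C_p))           # trace(C_q^{-1} C_p)
    d = mu_q - mu_p
    maha = d @ np.linalg.solve(C_q, d)                  # Mahalanobis term
    logdet = 2.0 * np.sum(np.log(np.diag(Lq))) - np.linalg.slogdet(C_p)[1]
    return 0.5 * (tr + maha - n + logdet)
```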
The task at step (b) is to compute the moments of the ith tilted distribution q̂_i(x), which are then used to update the approximation q(x). This requires integration over R^n, which is generally numerically intractable if q̂_i(x) were arbitrary. Fortunately, each factor t_i(x) in (3.1) is of projection form and depends only on the scalar u^t x, with the vector u ∈ R^n being either a_i or L_i. This is the key fact that renders the relevant high-dimensional integrals numerically tractable. Below we write the factor t_i(x) as t_i(u_i^t x) and, accordingly, the ith tilted distribution as q̂_i(x) = Ẑ_i^{-1} t_i(u_i^t x) N(x|µ_{\i}, C_{\i}), upon replacing ∏_{j≠i} t̃_j(x) with its normalized version N(x|µ_{\i}, C_{\i}), and accordingly the normalizing constant Ẑ_i. Since a Gaussian is determined by its mean and covariance, it suffices to evaluate the 0th to 2nd moments of q̂_i(x). The projection form of the factor t_i allows reducing the moment evaluation of q̂_i(x) to 1D integrals. Theorem 3.1 gives the update scheme for q(x) from q̂_i(x). Theorem 3.1. Define the auxiliary 1D quantities Z_s, s̄ ∈ R and C_s by Z_s = ∫ t_i(s) N(s | u_i^t µ_{\i}, u_i^t C_{\i} u_i) ds, s̄ = Z_s^{-1} ∫ s t_i(s) N(s | u_i^t µ_{\i}, u_i^t C_{\i} u_i) ds and C_s = Z_s^{-1} ∫ s^2 t_i(s) N(s | u_i^t µ_{\i}, u_i^t C_{\i} u_i) ds − s̄^2. Then Ẑ_i = Z_s, and the mean µ_{q̂_i} and covariance C_{q̂_i} of q̂_i are given respectively by µ_{q̂_i} = µ_{\i} + ((s̄ − u_i^t µ_{\i}) / (u_i^t C_{\i} u_i)) C_{\i} u_i and C_{q̂_i} = C_{\i} − ((u_i^t C_{\i} u_i − C_s) / (u_i^t C_{\i} u_i)^2) C_{\i} u_i u_i^t C_{\i}. Similarly, the precision mean h_{q̂_i} and precision Λ_{q̂_i} are given respectively by h_{q̂_i} = h_{\i} + λ_{1,i} u_i and Λ_{q̂_i} = Λ_{\i} + λ_{2,i} u_i u_i^t, with λ_{1,i} = s̄/C_s − u_i^t µ_{\i}/(u_i^t C_{\i} u_i) and λ_{2,i} = 1/C_s − 1/(u_i^t C_{\i} u_i). Proof. The expressions for Ẑ_i, µ and C were given in [18, Section 3]. Thus it suffices to derive the formulas for (h, Λ). Recall the Sherman-Morrison formula [20, p. 65]: for any invertible B ∈ R^{n×n} and u, v ∈ R^n, there holds (B + uv^t)^{-1} = B^{-1} − (B^{-1} u v^t B^{-1}) / (1 + v^t B^{-1} u) (3.3). Applying it to the rank-one form of C_{q̂_i} gives the stated expression for the precision matrix Λ_{q̂_i} = C_{q̂_i}^{-1}, and computing h := Λ_{q̂_i} µ_{q̂_i} gives the stated precision mean. This completes the proof of the theorem. In both parameterizations, the 1D integrals (Z_s, s̄, C_s) are needed, and they depend on u_i^t µ_{\i} and u_i^t C_{\i} u_i. A direct approach is first to downdate (the Cholesky factor of) Λ and then to solve a linear system. In practice, this can be expensive, but the cost can be mitigated: these quantities can be computed without the downdating step; see Lemma 3.1 below. Below we use the super- or subscripts n and o to denote a variable at the current iteration and at the last iteration, respectively. Lemma 3.1. Let (h, Λ), with corresponding moment parameters (µ, C), be the natural parameter of q(x) and let (λ_{1,i}, λ_{2,i}) be defined as in Theorem 3.1. Then the mean u_i^t µ_{\i} and variance u_i^t C_{\i} u_i of the Gaussian distribution N(s | u_i^t µ_{\i}, u_i^t C_{\i} u_i) are respectively given by u_i^t µ_{\i} = (u_i^t µ − λ_{1,i} u_i^t C u_i)/(1 − λ_{2,i} u_i^t C u_i) and u_i^t C_{\i} u_i = u_i^t C u_i/(1 − λ_{2,i} u_i^t C u_i). Proof. We suppress the sub/superscript o. By the definition of u_i^t C_{\i} u_i and the Sherman-Morrison formula (3.3) applied to Λ_{\i} = Λ − λ_{2,i} u_i u_i^t, we obtain the expression for u_i^t C_{\i} u_i, and similarly, using h_{\i} = h − λ_{1,i} u_i, we obtain the expression for u_i^t µ_{\i}. This completes the proof of the lemma. Since the quantities for the 1D integrals can be calculated from variables updated in the last iteration, it is unnecessary to form cavity distributions explicitly. Indeed, the cavity precision is formed by Λ_{\i} = Λ_o − λ^o_{2,i} u_i u_i^t, and the updated precision is given by Λ_n = Λ_{\i} + λ^n_{2,i} u_i u_i^t; and similarly for h. Thus, we can update Λ directly with (λ^o_{2,i}, λ^n_{2,i}) and h with (λ^o_{1,i}, λ^n_{1,i}); this is summarized in the next remark. Remark 3.1. The differences λ^n_{k,i} − λ^o_{k,i}, k = 1, 2, can be used to update the natural parameter (h, Λ): h_n = h_o + (λ^n_{1,i} − λ^o_{1,i}) u_i and Λ_n = Λ_o + (λ^n_{2,i} − λ^o_{2,i}) u_i u_i^t. Moreover, the sign of λ^n_{2,i} − λ^o_{2,i} determines whether to update or downdate the Cholesky factor of Λ. Update schemes and algorithms Now we state the direct update scheme, i.e., without explicitly constructing the intermediate cavity distribution q_{\i}(x), for both natural and moment parameterizations. Theorem 3.2. Let (h, Λ) and (µ, C) be the natural and moment parameters of the Gaussian approximation q(x), respectively, and write δλ_{k,i} = λ^n_{k,i} − λ^o_{k,i}, k = 1, 2. The following update schemes hold. (i) The precision mean h and precision Λ can be updated by h_n = h_o + δλ_{1,i} u_i and Λ_n = Λ_o + δλ_{2,i} u_i u_i^t. (ii) The mean µ and covariance C can be updated by C_n = C_o − (δλ_{2,i}/(1 + δλ_{2,i} u_i^t C_o u_i)) C_o u_i u_i^t C_o and µ_n = µ_o + ((δλ_{1,i} − δλ_{2,i} u_i^t µ_o)/(1 + δλ_{2,i} u_i^t C_o u_i)) C_o u_i. Proof.
The first assertion is direct from Theorem 3.1 and Remark 3.1, and it can be rewritten as Λ_n = Λ_o + δλ_{2,i} u_i u_i^t and h_n = h_o + δλ_{1,i} u_i. By the Sherman-Morrison formula (3.3), the covariance C_n = Λ_n^{-1} is given by C_n = C_o + η_2 C_o u_i u_i^t C_o, where the scalar η_2 := −δλ_{2,i}/(1 + δλ_{2,i} u_i^t C_o u_i), and the second identity follows from Remark 3.1. Similarly, the mean µ_n := C_n h_n is given by µ_n = µ_o + η_1 C_o u_i, where, in view of Remark 3.1, η_1 = (δλ_{1,i} − δλ_{2,i} u_i^t µ_o)/(1 + δλ_{2,i} u_i^t C_o u_i). This completes the proof of the theorem. All matrix operations in Theorem 3.2 are of rank-one type, which can be implemented stably and efficiently with Cholesky factors and their update/downdate; see Section 3.3 for details. Thus, in practice, we employ the Cholesky factors of the precision Λ and covariance C, denoted by Λ_chol and C_chol, respectively, instead of Λ and C. Further, we also use the auxiliary variables (λ_{1,i}, λ_{2,i}) defined in Theorem 3.1 and stack {(λ_{1,i}, λ_{2,i})}_{i=1}^{m1+m2} into two vectors, which are initialized to zero. Thus, we obtain two inference procedures for Poisson data with a Laplace type prior, given in Algorithms 1 and 2. Both algorithms share the same inner loop: randomly choose an index i to update, compute the mean and variance for the 1D Gaussian integrals by Lemma 3.1, evaluate the 1D integrals, update the (Cholesky factors of the) parameters, and check the stopping criterion. The rigorous convergence analysis of EP is outstanding. Nonetheless, empirically, it often converges very fast, which is also observed in our numerical experiments in Section 5. In practice, one can terminate the iteration by monitoring the relative change of the parameters or by fixing the maximum number K of iterations. The important task of computing the 1D integrals will be discussed in Section 4 below. Efficient implementation and complexity estimate The rank-one matrix update A ± βuu^t, for A ∈ R^{n×n}, u ∈ R^n and β > 0, can be carried out stably and efficiently on the Cholesky factor of A using the vector √β u. The update step of A can be viewed as an iteration from A_k to A_{k+1}. Let the upper triangular matrices R_k and R_{k+1} be the Cholesky factors of A_k and A_{k+1}, respectively, i.e., A_k = R_k^t R_k and A_{k+1} = R_{k+1}^t R_{k+1}. There are two possible cases: an update, A_{k+1} = A_k + βuu^t, and a downdate, A_{k+1} = A_k − βuu^t. The update/downdate is available in several packages. For example, in MATLAB, the function cholupdate implements the update/downdate of Cholesky factors, based on the LAPACK subroutines ZCHUD and ZCHDD. Next, we discuss the computational complexity per inner iteration. The first step picks one index i, which is of constant complexity. For the second step, i.e., computing the mean and variance for the 1D integrals, the dominant part is a linear solve involving upper triangular matrices and a matrix-vector product, for the natural and moment parameterizations respectively. For either parameterization, it incurs O(n^2) operations. The third step computes s̄ and C_s from the one-dimensional integrals. For a Poisson site, the complexity is O(y_i), and for a Laplace site, it is O(1). Last, the fourth step is dominated by Cholesky factor modifications, and its complexity is O(n^2). Overall, the computational complexity per inner iteration is O(n^2 + y_i). In a large-data setting, y_i is at most comparable with n, and thus the complexity is about O(n^2). In passing, we note that in practice, the covariance/precision matrix may admit additional structure, e.g., sparsity, which translates into structure on the Cholesky factors. For a general sparsity assumption, it seems unclear how to effectively exploit it in the Cholesky update/downdate for enhanced efficiency, except in the diagonal case, which can be incorporated into the algorithm straightforwardly.
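The rank-one structure of Theorems 3.1 and 3.2 is easy to prototype. The first sketch below computes the tilted moments by brute-force 1D quadrature (instead of the stable recursions developed in Section 4) and then applies the moment-parameter update and the natural-parameter increments; the generic names t, u, mu_cav and C_cav stand in for the quantities of this section, and the code is an illustration of the algebra rather than a production implementation.

```python
import numpy as np

def tilted_1d_moments(t, m, s2, num=4001, width=8.0):
    """Zeroth/first/second moments of t(s) * N(s | m, s2) by brute-force
    quadrature on a grid of +/- width standard deviations (illustration only;
    Section 4 derives stable closed-form/recursive evaluations instead)."""
    sd = np.sqrt(s2)
    s = np.linspace(m - width * sd, m + width * sd, num)
    w = np.exp(-0.5 * (s - m) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    f = t(s) * w                                    # t must accept arrays
    Z = np.trapz(f, s)
    s_bar = np.trapz(s * f, s) / Z
    C_s = np.trapz(s ** 2 * f, s) / Z - s_bar ** 2
    return Z, s_bar, C_s

def ep_site_update(mu_cav, C_cav, u, t):
    """Moment-matching update for one projection-form site t(u'x),
    given the cavity moments (mu_cav, C_cav), as in Theorem 3.1."""
    Cu = C_cav @ u
    m, s2 = u @ mu_cav, u @ Cu                      # cavity marginal of s = u'x
    Z, s_bar, C_s = tilted_1d_moments(t, m, s2)
    mu_new = mu_cav + (s_bar - m) / s2 * Cu
    C_new = C_cav - (s2 - C_s) / s2 ** 2 * np.outer(Cu, Cu)
    lam1 = s_bar / C_s - m / s2                     # natural-parameter site terms
    lam2 = 1.0 / C_s - 1.0 / s2
    return mu_new, C_new, lam1, lam2

# Example site: a Laplace factor t(s) = (alpha/2) exp(-alpha |s|), alpha = 1.
laplace_site = lambda s: 0.5 * np.exp(-np.abs(s))
```

A production implementation would, as discussed above, keep Cholesky factors of Λ or C and modify them by rank-one updates. NumPy/SciPy have no direct counterpart of MATLAB's cholupdate, so the sketch below implements the textbook O(n^2) rank-one update/downdate; it is a generic routine under the stated conventions, not code from the paper.

```python
import numpy as np

def chol_update(R, x, sign=+1.0):
    """Cholesky factor of A +/- x x^T, where A = R^T R with R upper triangular."""
    L = R.T.copy()                        # work on the lower-triangular factor
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    for k in range(n):
        r2 = L[k, k] ** 2 + sign * x[k] ** 2
        if r2 <= 0.0:                     # a downdate can destroy definiteness
            raise np.linalg.LinAlgError("downdate not positive definite")
        r = np.sqrt(r2)
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        L[k + 1:, k] = (L[k + 1:, k] + sign * s * x[k + 1:]) / c
        x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
    return L.T                            # back to the upper-triangular convention
```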
Stable evaluation of 1D integrals Now we develop a stable implementation for the three 1D integrals Z_s, s̄ and C_s in Theorem 3.1. These integrals form the basic components of Algorithms 1 and 2, and their stable, accurate and efficient evaluation is crucial to the performance of the algorithms. Suppressing the subscript i, we can write the integrals in a unified way as J_j = ∫ s^j t(s) N(s|m, σ^2) ds, j = 0, 1, 2, where the factor t(s) is either the Poisson likelihood or the Laplace prior and N(s|m, σ^2) is the cavity marginal of s. Then we can express s̄ and C_s in terms of the J_j by s̄ = J_1/J_0 and C_s = J_2/J_0 − (J_1/J_0)^2. Note that the normalizing constants in J_j cancel out in s̄ and C_s, and thus they can be ignored when evaluating the integrals. In essence, the computation boils down to the stable evaluation of moments of a (truncated) Gaussian distribution. This task was studied in several works [10,38]: [10] focuses on Gaussian moments, and [38] also discusses evaluating the integrals involving Laplace distributions. Below we derive the formulas for the (constrained) Poisson likelihood and the Laplace prior separately. Poisson likelihood Throughout, we suppress the subscript i, write V^+ etc. in place of V_i^+ etc., and introduce the scalar variable s = a^t x. Then the constraint on x transfers to one on s: a^t x > 0 corresponds to s > 0, and a^t x + r > 0 to s > −r, respectively. We shall slightly abuse the notation and use 1_{V^+}(s) as the indicator for the constraint on s. Then the Poisson likelihood t(x) can be equivalently written in either x or s as t(x) = ((a^t x + r)^y e^{−(a^t x + r)}/y!) 1_{V^+}(x) and t(s) = ((s + r)^y e^{−(s+r)}/y!) 1_{V^+}(s). Note that the factorial y! cancels out when computing s̄ and C_s, so it is omitted in the derivation below. For a fixed N(s|m, σ^2), the integrals J_{y,j} depend on the observed count y and the moment order j: J_{y,j} = ∫_b^∞ s^j (s + r)^y e^{−(s+r)} N(s|m, σ^2) ds, where the lower integration bound is b = 0 or b = −r, as is evident from the context. Note that the terms e^{−(s+r)} and N(s|m, σ^2) in J_{y,j} together give an unnormalized Gaussian density proportional to N(s|m − σ^2, σ^2). This allows us to reduce the integrals J_{y,j} to (truncated) Gaussian moment evaluations of the type I_y = ∫_b^∞ (s + r)^y N(s|c, σ^2) ds, with c = m − σ^2 (4.1), and accordingly s̄ and C_s: expanding s^j = ((s + r) − r)^j and using (4.1) gives s̄ = I_{y+1}/I_y − r and C_s = I_{y+2}/I_y − (I_{y+1}/I_y)^2. However, directly evaluating I_y can still be numerically unstable for large y. To avoid the potential instability, we develop a stable recursive scheme for I_y. Lemma 4.1. For y ≥ 2, the following recursion holds: I_y = (c + r) I_{y−1} + σ^2 (y − 1) I_{y−2} + σ^2 (b + r)^{y−1} N(b|c, σ^2), where the boundary term vanishes for b = −r. Proof. The definition of I_y implies I_y = (c + r) I_{y−1} + ∫_b^∞ (s + r)^{y−1} (s − c) N(s|c, σ^2) ds. Next we employ the identity (d/ds) N(s|c, σ^2) = −((s − c)/σ^2) N(s|c, σ^2) and apply integration by parts to the remaining term. Collecting the terms shows the desired recursion for the integral I_y. Lemma 4.1 is a two-term linear recurrence relation for the I_y. The coefficients of I_{y−1} and I_{y−2} are raised to increasing powers when expanding I_y in terms of I_0 and I_1, and thus the computation of I_y is susceptible to the evaluation errors of I_0 and I_1 for large y. This motivates a reciprocal recursive scheme, introducing a ratio sequence {L_y}_y defined by L_y = y I_{y−1}/I_y, for r = 0 or b = −r, in order to restore numerical stability. Note that L_y also admits the recursion L_y = y/((m − σ^2 + r) + σ^2 L_{y−1}), and I_y can be recovered from {L_y} by ln I_y = ln y! + ln I_0 − Σ_{i=1}^y ln L_i. We can compute s̄ and C_s directly from the L_y, namely s̄ = (y + 1)/L_{y+1} − r and C_s = (y + 1)(y + 2)/(L_{y+1} L_{y+2}) − ((y + 1)/L_{y+1})^2; the identities follow by straightforward computation. Last, we discuss the computation of the first three integrals I_0, I_1 and I_2, which are needed to start the recursion (a direct, non-recursive variant for moderate counts is also sketched below).
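For moderate counts, the truncated-Gaussian moments I_j can be computed directly with the forward recursion of Lemma 4.1 (here in the b = −r case, where the boundary term vanishes), with I_0 and I_1 expressed through the Gaussian CDF and density. The Python sketch below assumes SciPy and is meant only as an illustrative check of the formulas; for very large y, or for extreme standardized arguments where the CDF underflows, the ratio scheme and the tail-stable schemes discussed in the text should be used instead.

```python
import numpy as np
from scipy.stats import norm

def poisson_site_moments(y, r, m, s2):
    """Z_s (up to constant factors that cancel), s_bar and C_s for a Poisson
    site t(s) = (s+r)^y exp(-(s+r)) on s > -r (constraint C3), with cavity
    marginal N(s | m, s2)."""
    c = m - s2                       # exp(-s) * N(s|m,s2) is proportional to N(s|c,s2)
    sd = np.sqrt(s2)
    z = (c + r) / sd
    I = np.empty(y + 3)
    I[0] = norm.cdf(z)                                   # integral of N(s|c,s2) over (-r, inf)
    I[1] = (c + r) * norm.cdf(z) + s2 * norm.pdf(-r, loc=c, scale=sd)
    for j in range(2, y + 3):                            # Lemma 4.1, boundary term = 0
        I[j] = (c + r) * I[j - 1] + s2 * (j - 1) * I[j - 2]
    s_bar = I[y + 1] / I[y] - r
    C_s = I[y + 2] / I[y] - (I[y + 1] / I[y]) ** 2
    return I[y], s_bar, C_s
```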
We employ three different forms according to the integration range, expressed in terms of a standardized auxiliary variable η. The formulas are listed in Table 1, where erf and erfc denote the error function and the complementary error function, respectively, and erfcx(η) = e^{η^2}(1 − erf(η)). Since the value of 1 − erf(η) is vanishingly small for large η, we use Scheme 2 to avoid underflow. Scheme 3 is useful when the η value is very large, since both 1 − erf(η) and erfc(η) then suffer from numerical underflow. Note that when η is small, Scheme 3 is not as accurate as Scheme 2, so we use Scheme 2 in the intermediate range. Laplace potential Now we derive the formulas for evaluating the 1D integrals for the Laplace potential t(x) = (α/2) e^{−α|ℓ^t x|}. For any fixed ℓ ∈ R^n, we divide the whole space R^n into two disjoint half-spaces V^+ and V^-, i.e., R^n = V^+ ∪ V^-, with V^+ = {x : ℓ^t x > 0} and V^- = {x : ℓ^t x ≤ 0}. Then we split the Laplace potential t(x) into t(x) = (α/2) e^{−αℓ^t x} 1_{V^+}(x) + (α/2) e^{αℓ^t x} 1_{V^-}(x). The integrals involving t(x) N(s|µ, σ^2) (slightly abusing µ) can accordingly be divided into two parts, I_i^+ = (α/2) ∫_0^∞ s^i e^{−αs} N(s|µ, σ^2) ds and I_i^- = (α/2) ∫_{−∞}^0 s^i e^{αs} N(s|µ, σ^2) ds, i = 0, 1, 2. By the change of variables t = (s − µ ± ασ^2)/σ for I_i^±, respectively, these integrals can be expressed using the cumulative distribution function Φ of the standard Gaussian distribution. We shall view I_i^± as functions of µ and let I_i = I_i^+(µ) + (−1)^i I_i^+(−µ). Then we have s̄ = I_1/I_0 and C_s = I_2/I_0 − (I_1/I_0)^2. To avoid the potential underflow of a direct evaluation of Φ, we use the well-known (divergent) asymptotic expansion [1, item 7.1.23] 1 − Φ(η) ≈ (φ(η)/η) g(η), where φ is the standard Gaussian density and g(η) is the truncated alternating series of the expansion. This formula follows by integration by parts and allows accurate evaluation for large positive η. It was shown in [17] that the error of evaluating 1 − Φ(η) with a truncation of the asymptotic expansion is less than 10^{-11} for η > 5 with more than 8 terms in the summation of g(η). For η ≤ 5, 1 − Φ(η) can be accurately evaluated directly. Then we introduce a ratio β of the two terms entering I_0; with the ratio β, the two fractions I_1/I_0 and I_2/I_0 can be evaluated stably, and a further algebraic identity is used to avoid the potential numerical instability of the first term in I_2/I_0.
Numerical experiments Now we numerically illustrate the EP algorithm on realistic images. In the implementation, we employ the natural parameterization, i.e., Algorithm 1, which appears to be numerically more robust. We measure the accuracy of a reconstruction x* relative to the ground truth x† by the standard L2-error ||x* − x†||_2, the structural similarity (SSIM) index (by the MATLAB built-in ssim), and the peak signal-to-noise ratio (PSNR) (by the MATLAB built-in psnr with peak value 1). For comparison, we also present the MAP, computed by a limited-memory BFGS algorithm [30] with the constraint C_1. The hyperparameter α in the prior p(x) is determined in a trial-and-error manner. The quantitative results are summarized in Tables 2 and 3. The EP mean is mostly comparable with the MAP in all three metrics, and the reconstruction quality improves steadily as the number of projection angles increases. These results show clearly the feasibility of EP for images of realistic size. Interestingly, the shape of the EP variance closely resembles that of the phantom. This might indicate that the algorithm is rather certain in the cold regions, where the error is close to zero, and more uncertain about the regions where the error is potentially larger. To further illustrate the approximation, we plot in Fig. 5 the cross-sections and the 95% highest posterior density (HPD) regions estimated from the EP covariance.
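The error metrics used above are quoted with their MATLAB built-ins; an equivalent NumPy sketch of the L2-error and PSNR (peak value 1) is given below for readers working in Python. This is a generic utility, not code from the paper; the SSIM index would additionally require a package such as scikit-image.

```python
import numpy as np

def recon_metrics(x_rec, x_true, peak=1.0):
    """L2 reconstruction error and PSNR (with the stated peak value)."""
    err = np.linalg.norm(x_rec - x_true)
    mse = np.mean((x_rec - x_true) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return err, psnr
```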
The EP mean is close to the MAP, and thus it also suffers slightly from a reduced magnitude, as is typical of the total variation penalty in variational regularization. This also concurs with the error metrics in Tables 2 and 3. The thrust of EP is that it can also provide uncertainty estimates via the covariance, which is directly unavailable from the MAP. In sharp contrast, the popular Laplace approximation (see Appendix B) can fail to yield a reasonable approximation for nonsmooth priors, whereas MCMC is prohibitively expensive for large images, though being asymptotically exact; see Appendix C for further numerical results. So overall, EP represents a computationally feasible approach to deliver uncertainty estimates. Next, we present an experimental evaluation of the convergence of the EP algorithm, which is a long outstanding theoretical issue, on the following setup: the Shepp-Logan phantom and a Radon matrix A ∈ R^{4255×16384} (i.e., 185 rays per projection angle and angles [0 : 8 : 179]). We denote the mean and covariance after k outer iterations by µ_k and C_k, respectively, and the converged iterate tuple by (µ*, C*). The EP mean µ_k converges rapidly, and visually it reaches convergence after five iterations, since thereafter the changes are visually indistinguishable. Fig. 7 shows the errors of the iterate tuple (µ_k, C_k) with respect to (µ*, C*), where the errors δµ = µ_k − µ* and δC = C_k − C* are measured by the L2-norm and the spectral norm, respectively. This phenomenon is also observed in all other experiments, although not presented. Hence, both the mean and the covariance converge rapidly, showing the steady and fast convergence of EP. Last, we illustrate the inference procedure with data taken from the Michigan Image Reconstruction Toolbox. These numerical results with different experimental settings show clearly that EP can provide point estimates comparable with the MAP as well as uncertainty information by means of the variance estimate.
Conclusion In this work, we have developed inference procedures for the constrained Poisson likelihood arising in emission tomography. They are based on expectation propagation as developed in the machine learning community. The detailed derivation of the algorithms, their complexity, and their stable implementation are given for a Laplace type prior. Extensive numerical experiments show that the EP algorithm (with natural parameters) converges rapidly, can deliver an approximate posterior distribution whose mean is comparable with the MAP together with an uncertainty estimate, and can scale to images of realistic size. Thus, the approach can be viewed as a feasible, fast alternative to general-purpose but expensive MCMC for rapid uncertainty quantification with Poisson data. There are several avenues for future work. First, it is of enormous interest to analyze the convergence rate and accuracy of EP, and of more general approximate inference techniques, e.g., variational Bayes, which have all achieved great practical success but largely defied theoretical analysis. Second, it is important to further extend the flexibility of EP algorithms to more complex posterior distributions, e.g., those lacking projection form. One notable example is the isotropic total variation prior, which appears frequently in practical imaging algorithms. This may require introducing an additional layer of approximation in the spirit of iteratively reweighted least-squares or Monte Carlo computation of low-dimensional integrals.
Third, many experimental studies show that EP converges very fast, with convergence reached within five outer iterations for the Poisson model under consideration. However, the overall O(mn^2) computational complexity is still very high for all current implementations [19] and does not scale well to truly large images. Hence, it is of great interest to accelerate the algorithms, e.g., by exploiting a low-rank structure of the map A or the diagonal dominance of the posterior covariance.
A Parameterizing Gaussian distributions For a Gaussian N(x|µ, C) with mean µ ∈ R^n and covariance C ∈ S^n_+, the density π(x|µ, C) is given by π(x|µ, C) = (2π)^{-n/2} |C|^{-1/2} exp(−(1/2)(x − µ)^t C^{-1} (x − µ)) = exp(−(1/2) x^t Λ x + h^t x + ζ), where the parameters Λ ∈ S^n_+, h ∈ R^n and ζ ∈ R are respectively given by Λ = C^{-1}, h = Λµ, and ζ = −(1/2)(n log 2π − log|Λ| + µ^t Λ µ). Thus, the density π(x|µ, C) is also uniquely defined by Λ and h. In the literature, Λ is often referred to as the precision matrix and h as the precision mean, and the pair (h, Λ) is called the natural parameter of a Gaussian distribution. It is easy to check that the product of m Gaussians {N(x|µ_k, C_k)}_{k=1}^m is also a Gaussian N(x|µ, C) after normalization, and the mean µ and covariance C of the product are given by C = (Σ_{k=1}^m C_k^{-1})^{-1} and µ = C Σ_{k=1}^m C_k^{-1} µ_k.
B Laplace approximation In the engineering community, one popular approach to approximate the posterior distribution p(x|y) is the Laplace approximation [42,4]. It constructs a Gaussian approximation by the second-order Taylor expansion of the negative log-posterior −log p(x|y) around the MAP x̂. Upon ignoring the unimportant constant and smoothing the Laplace potential, the negative log-posterior J(x) is given by J(x) = Σ_{i=1}^{m1} [(a_i^t x + r_i) − y_i log(a_i^t x + r_i)] + α Σ_{i=1}^{m2} ((L_i^t x)^2 + ε^2)^{1/2}, where ε > 0 is a small smoothing parameter to restore differentiability. The gradient ∇J(x) and Hessian ∇^2 J(x) are given respectively by ∇J(x) = Σ_i (1 − y_i/(a_i^t x + r_i)) a_i + α Σ_i (L_i^t x)((L_i^t x)^2 + ε^2)^{-1/2} L_i and ∇^2 J(x) = Σ_i (y_i/(a_i^t x + r_i)^2) a_i a_i^t + α Σ_i ε^2 ((L_i^t x)^2 + ε^2)^{-3/2} L_i L_i^t. Since ∇J(x̂) = 0, the Taylor expansion reads J(x) ≈ J(x̂) + (1/2)(x − x̂)^t ∇^2 J(x̂)(x − x̂), and ∇^2 J(x̂) approximates the precision matrix. When |L_i^t x̂| ≫ ε, the second term in ∇^2 J(x̂) is negligible and thus the Hessian of the negative log-likelihood is dominating; whereas for |L_i^t x̂| ≪ ε, the second term is dominating. In either case, the approximation is problematic. In practice, it is also popular to combine smoothing with an iteratively weighted approximation (e.g., the lagged diffusivity approximation [44]) by fixing the weight ((L_i^t x)^2 + ε^2)^{-1/2} in ∇J(x) at its value ((L_i^t x̂)^2 + ε^2)^{-1/2}, which leads to the modified Hessian ∇̃^2 J(x̂) = Σ_i (y_i/(a_i^t x̂ + r_i)^2) a_i a_i^t + α Σ_i ((L_i^t x̂)^2 + ε^2)^{-1/2} L_i L_i^t (B.1). The Hessians ∇^2 J(x̂) and ∇̃^2 J(x̂) will be close to each other if the |L_i^t x̂| are all small, which is expected to hold for truly sparse signals, i.e., L_i^t x ≈ 0 for i = 1, ..., m2. One undesirable feature of the Laplace approximation is that the precision approximation depends crucially on the smoothing parameter ε.
C Comparison with MCMC and Laplace approximation Numerically, the accuracy of EP has been found to be excellent in several studies [35,18], although there is still no rigorous justification. We provide an experimental evaluation of its accuracy against Markov chain Monte Carlo (MCMC) and the Laplace approximation. The true posterior distribution p(x|y) can be explored by MCMC [31,36]. However, usually a large number of samples is required to obtain reliable statistics. Thus, to obtain further insights, we consider a one-dimensional problem, i.e., a Fredholm integral equation of the first kind [34] over the interval [−6, 6] with kernel K(s, t) = φ(s − t) and exact solution x(t) = φ(t), where φ(s) = (10 + 10 cos(πs/3)) χ_{[−3,3]}(s). It is discretized by a standard piecewise constant Galerkin method, and the resulting problem is of size 100, i.e., x ∈ R^100 and A ∈ R^{100×100}.
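A crude version of this one-dimensional test problem is easy to set up; the sketch below uses a midpoint-rule approximation rather than the piecewise constant Galerkin discretization of the text, and assumes an illustrative background value r = 1 (the paper does not specify this number), so it should be read as a rough stand-in rather than a reproduction of the experiment.

```python
import numpy as np

def convolution_test_problem(n=100, scale=10.0):
    """Midpoint-rule approximation of the 1D problem of Appendix C:
    K(s, t) = phi(s - t), exact solution x(t) = phi(t) on [-6, 6]."""
    phi = lambda s: (scale + scale * np.cos(np.pi * s / 3.0)) * (np.abs(s) <= 3.0)
    h = 12.0 / n
    t = -6.0 + (np.arange(n) + 0.5) * h           # midpoints of the n cells
    A = phi(t[:, None] - t[None, :]) * h           # quadrature weight absorbed into A
    return A, phi(t), t

A, x_true, t = convolution_test_problem()
rng = np.random.default_rng(0)
y = rng.poisson(A @ x_true + 1.0)                  # Poisson data, assumed background r = 1
```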
We implement a random walk Metropolis-Hastings sampler with Gaussian proposals and optimize the step size so that the acceptance ratio is close to 0.23 in order to ensure good convergence [6]. The hyperparameter α in the prior distribution is set to 1. The chain is run for a length of 2 × 10^7, and the last 10^7 samples are used to compute the mean and covariance. To compare the Gaussian approximation by EP with the MCMC results, we present the mean, MAP, covariance, and 95% HPD region. Both approximations concentrate in the same region, and the shape and magnitude of the 95% HPD region / covariance are mostly comparable; see Figs. 10 and 11, showing the validity of EP. However, there are noticeable differences in the recovered mean: the EP mean is nearly piecewise constant, which differs from that obtained by MCMC. So EP gives an intermediate approximation between the MAP and the posterior mean. In comparison with the MAP, EP provides not only a point estimate but also the associated uncertainty, i.e., the covariance. Interestingly, the covariance is clearly diagonally dominant, which suggests the use of a banded covariance or its Cholesky factor for speeding up the algorithm. The Laplace approximation described in Appendix B depends heavily on the smoothing parameter ε, and clearly there is a tradeoff between the accuracy of the MAP and that of the variance approximation; see Fig. 12 for the numerical results corresponding to four different smoothing parameters ε, based on the approximation (B.1). This tradeoff is largely attributed to the nonsmooth Laplace type prior, which poses significant challenges for constructing the approximation. Thus, it fails to yield a reasonable approximation to the target posterior distribution. In contrast, the EP algorithm only involves integrals, which are more amenable to non-differentiability, and thus it can handle nonsmooth priors naturally. In passing, we note that the uncertainty estimate from the posterior distribution differs greatly from the concept of noise variance [15], which is mainly concerned with the sensitivity of the reconstruction with respect to the noise in the input data y. The latter is derived using the chain rule and the implicit function theorem, under assumptions of good smoothness and local strong convexity of the associated functional [15]. In contrast, the uncertainty in the Bayesian framework stems from the imprecise knowledge of the inverse solution encoded in the prior and from the statistics of the data. Thus, the results of these two approaches are not directly comparable.
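A minimal version of the random walk Metropolis-Hastings sampler described above is sketched below for illustration. It assumes an unnormalized negative log-density such as the one sketched after the problem formulation, and it is a generic textbook sampler (no adaptive step-size tuning, no thinning), not the exact code used in the paper; in practice the step size would be tuned so that the reported acceptance rate of about 0.23 is reached.

```python
import numpy as np

def rw_metropolis(neg_log_post, x0, step, n_samples, rng=None):
    """Random-walk Metropolis-Hastings with isotropic Gaussian proposals.
    `neg_log_post` returns an unnormalized negative log-density
    (+inf outside the constraint set causes automatic rejection)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    nlp = neg_log_post(x)
    samples, accepted = np.empty((n_samples, x.size)), 0
    for k in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        nlp_prop = neg_log_post(prop)
        if np.log(rng.uniform()) < nlp - nlp_prop:   # accept with prob min(1, ratio)
            x, nlp = prop, nlp_prop
            accepted += 1
        samples[k] = x
    return samples, accepted / n_samples
```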
On Performance Evaluation of Inertial Navigation Systems: The Case of Stochastic Calibration Davide A. Cucci, Lionel Voirol, Mehran Khaghani, and Stéphane Guerrier Abstract—In this work, we address the problem of rigorously evaluating the performances of an inertial navigation system (INS) during its design phase in the presence of multiple alternative choices. We introduce a framework based on Monte Carlo simulations in which a standard extended Kalman filter is coupled with realistic and user-configurable noise generation mechanisms to recover a reference trajectory from noisy measurements. The evaluation of several statistical metrics of the solution, aggregated over hundreds of simulated realizations, provides reasonable estimates of the expected performances of the system in real-world conditions. This framework allows the user to make a choice between alternative setups. To show the generality of our approach, we consider an example application to the problem of stochastic calibration. Two competing stochastic modeling techniques, namely, the widely popular Allan variance linear regression and the emerging generalized method of wavelet moments, are rigorously compared in terms of the framework's defined metrics and in multiple scenarios. We find that the latter provides substantial advantages for certain classes of inertial sensors. Our framework allows considering a wide range of problems related to the quantification of navigation system performances, such as the robustness of integrated navigation systems [such as INS/global navigation satellite system (GNSS)] with respect to outliers or other modeling imperfections. While real-world experiments are essential to assess the performance of new methods, they tend to be costly and are typically unable to lead to a sufficient number of replicates to provide suitable estimates of, for example, the correctness of the estimated uncertainty. Therefore, our method can contribute to bridging the gap between these experiments and the pure statistical considerations usually found in the stochastic calibration literature.
I. INTRODUCTION Navigation consists of estimating the position, velocity, and attitude of a moving body with respect to a well-defined coordinate reference system. The inertial navigation system (INS) is found at the core of many modern navigation systems: inertial sensors keep track of the acceleration and the angular velocity (rotation rate) of the body, which can be integrated over time to estimate its position, velocity, and attitude [1]. INS is autonomous (largely unaffected by the surrounding environment), widely available and affordable, provides output at a high rate (up to a few kHz), and is accurate over short time spans. However, it suffers from an accumulation of random errors over time, eventually leading to unreliable position and orientation estimates. This limitation is magnified with low-grade inertial sensors, such as the ones typically employed in smartphones or drones, where position estimates can become practically unusable in as little as a few tens of seconds. To control this inevitable position and orientation drift, inertial measurements are typically fused with aiding information coming from other sensors. In outdoor applications, such as in planes, cars, ships, and drones, the aiding data typically come from global navigation satellite systems (GNSSs). These systems provide absolute position, velocity, and timing (PVT) data at lower rates compared to INS (typically 1-20 Hz) but are immune to error accumulation. Therefore, GNSS is complementary to INS, and the INS-GNSS integration is widely used in practice. In indoor applications, or whenever the system cannot depend on the reliable and continuous reception of uncorrupted GNSS signals, such aiding information can come, for example, from cameras, in visual-inertial systems [2], or from ultrawideband beacons [3]. The fusion of data from INS and other aiding systems, such as GNSS, is typically performed via variants of the Kalman filter, such as the commonly used extended Kalman filter (EKF) [4]. One of the most critical steps in the design of a navigation system is the quantification of its performance. When several types of heterogeneous sensors, such as inertial and GNSS, are fused together, many factors can influence the properties of the estimation error. Indeed, their combined impact then tends to be difficult to predict. Once the system is assembled, the experimental assessment is complicated by the difficulty of acquiring a sufficiently accurate ground truth. Such experiments can even be impossible because, for example, the safe and proper functioning of the platform's guidance and control system (such as the autopilot in drones) requires the correct functioning of the navigation system. Furthermore, the validity of performance indicators, such as the consistency of position and orientation uncertainty estimates, requires the execution of hundreds or thousands of trajectories, which is unfeasible in practice.
In this work, we propose a framework for quantifying navigation system performances that rely on Monte Carlo simulations. We consider a classical EKF for INS/GNSS navigation that the user can configure according to different competing setups, such as sensor selection or modeling choices. Our framework allows performing thousands of Monte Carlo simulations using a user-specified trajectory that is representative of the chosen application scenario. In each run, we generate noise-free sensor readings from the user-supplied trajectory and corrupt those with realistic noise samples, either taken from static data (typically available and widely used in the case of inertial sensors) or sampled from a user-specified noise model. The results of all simulations are aggregated to compute several types of metrics and allow the user to compare the results quantitatively. Optionally, static data can be replaced with data collected in different dynamic conditions, e.g., using a rotation table, and noise samples can be chosen according to the dynamics of the user-specified trajectory. While this approach cannot entirely substitute the experimental evaluation of the final system, the use of realistic noise samples and user-defined noise generation mechanisms allows users to nearly reproduce real-world operation scenarios and substantially decrease the number of required experiments. The idea of generating synthetic sensor measurements as a way to validate navigation systems has been already proposed in the literature. Jwo et al. [5] focused on generating synthetic specific force and angular velocity measurements starting from a user-defined trajectory for which the kinematic properties are known analytically. However, this work ignores stochastic or deterministic error generation. Similarly, in [6], a motion model of the platform to be studied (a ship in this case) is used to generate a realistic trajectory, also considering the effects of the environment on the motion, such as wind and waves. Young et al. [7] provided an open-source simulator for inertial sensors that works by interpolating a discrete time trajectory obtained from a motion capture system. Other works, e.g., [8] and [9], also consider sampling random errors by relying on common stochastic models and their power spectral density functions. In our work, we aim at moving one step further in the quantification of performances of INSs by introducing a sensor fusion step based on an EKF that aims at recovering the true trajectory from noisy data. This step is repeated hundreds of times in a Monte Carlo fashion to determine reliable statistics of the navigation performances. Furthermore, we circumvent the problem of simulating realistic random errors, which is hard to validate in practice, since the true noise models for a given inertial sensor are unknown, by using blocks (containing a suitable number of data points) of real noise samples available from static acquisitions. This approach relies on the idea of block bootstrap, which is a standard statistical technique allowing to resample the dependent data [10]. To show the benefits of our approach, we consider an example application focusing on the problem of establishing a suitable stochastic model for inertial sensors. Indeed, inertial sensors, like any other sensor, have errors, both deterministic and stochastic. 
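The block-bootstrap style noise corruption mentioned above is simple to express in code. The sketch below adds a contiguous, randomly positioned block of a static error recording to noise-free synthetic readings; the array names and shapes are illustrative assumptions, and the snippet is a conceptual sketch rather than the R implementation referenced later in the paper.

```python
import numpy as np

def corrupt_with_recorded_noise(clean, static_record, rng=None):
    """Add a contiguous, randomly positioned block of a static error recording
    to noise-free synthetic sensor readings (block-bootstrap style, preserving
    the temporal correlation of the real errors).

    clean         : (N, k) noise-free readings from the reference trajectory
    static_record : (M, k) static acquisition containing only sensor errors, M >= N
    """
    rng = np.random.default_rng() if rng is None else rng
    n, m = clean.shape[0], static_record.shape[0]
    if m < n:
        raise ValueError("static recording shorter than the simulated trajectory")
    start = rng.integers(0, m - n + 1)        # random block position
    return clean + static_record[start:start + n]
```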
Deterministic errors, such as the stable parts of scale factors and axis nonorthogonality, can be precalibrated and removed from the measurements directly (for example, see [11] and the references therein). The additive stochastic part of the error, for example, composed of white noise (WN), turn-on biases, and other time-correlated processes, can only be taken into account "on-flight" during navigation, provided that a suitable stochastic model for the sensor errors has been established beforehand. The latter process is often referred to as "stochastic calibration," and it is an important step in navigation system design and implementation. Indeed, an accurate stochastic calibration allows, among others, to maximize the estimation accuracy by enabling the correct estimation and removal of the maximum possible portion of the stochastic errors from sensor measurements and identifying misbehaving sensors or measurements via fault detection and exclusion mechanisms. Stochastic calibration of inertial sensors has been widely studied in the last decades, and several methods are available, ranging from the Allan variance (AV) linear regression method [12], [13], which we refer to as AVLR, to the more recent Generalized Method of Wavelet Moments (GMWM) [14], [15], not to mention the methods based on the analysis of the power spectral density of the sensor errors [16], [17], correlation of filtered sensor outputs [18], and maximum-likelihood estimation [19]. The AVLR and the GMWM are among the most popular choices by practitioners and researchers, and can be employed to model stochastic errors in both static and dynamic conditions [20], [21]. In the context of stochastic calibration, each technique allows to consider a class of possible stochastic models and selects the one within this class that appears to be the most suitable according to some technique-specific "goodness-offit" metric. This means that different techniques typically yield different stochastic models, and many models can be practically equivalent given a metric. It is, in general, difficult to relate these goodness-of-fit metrics used in stochastic modeling to the actual navigation performances that users will obtain in their applications. Furthermore, the specific values determined for stochastic parameters, such as, for instance, the variance of the rate random walk (RW) innovations in a gyroscope, are very distant from navigation performance metrics meaningful for the final user, such as orientation error. Following our proposed approach, it is possible to compare different stochastic modeling techniques according to the criteria that are highly significant from the user's perspective, such as the obtained position and orientation estimation error statistics or the consistency of the derived confidence intervals, as obtained by the navigation filter in simulated, yet realistic, scenarios. As an illustrative example, in this work, we show that, in general, the models obtained with GMWM perform better in navigation than the ones obtained with the AVLR. This conclusion is supported by extensive simulation analyses based on real-world trajectories and further backed up by comparison against data from a real sensor. The differences are significant when a low-cost microelectromechanical system (MEMS) inertial measurement units (IMUs) are considered, while less pronounced if the stochastic nature of the error processes is the one typical of tactical or navigation grade sensors. 
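To fix ideas, the additive stochastic error described above (e.g., a turn-on bias plus white noise plus a random walk) can be simulated with a few lines of code; the parameter values below are purely illustrative and do not correspond to any particular device or to the sensors studied later in the paper.

```python
import numpy as np

def simulate_additive_error(n, sigma_wn, sigma_rw, sigma_b, rng=None):
    """Synthetic additive sensor error: turn-on bias + white noise + random walk.
    All sigmas are per-sample standard deviations (illustrative values only)."""
    rng = np.random.default_rng() if rng is None else rng
    bias = sigma_b * rng.standard_normal()            # constant over one run
    wn = sigma_wn * rng.standard_normal(n)            # white noise
    rw = np.cumsum(sigma_rw * rng.standard_normal(n)) # random walk
    return bias + wn + rw

# e.g. a 2 h static record at 100 Hz for one gyroscope axis (rad/s)
err = simulate_additive_error(n=720_000, sigma_wn=5e-3, sigma_rw=5e-6, sigma_b=1e-3)
```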
The proposed framework also allows us to easily answer other subtle questions, such as evaluating the impact of mismodeling in stochastic calibration, for example, if the value of one parameter has been wrongly estimated, or if the stochastic model has been wrongly chosen, but also in relation to sensor selection, comparison, and performance evaluation in general. To the best of our knowledge, it is difficult to rigorously evaluate these aspects of the design of an INS without repeating the process of realistic noise generation and sensor fusion in a Monte Carlo fashion, as proposed in this work. The rest of this article is organized as follows. In Section II, we summarize the main contributions of this article. A brief review on stochastic calibration is given in Section III, specifically in relation to the two methods considered in this work, namely, the AVLR and the GMWM, discussed in Sections III-A and III-B, respectively. In Section IV, we introduce the proposed framework to evaluate the performances of navigation systems, and we discuss its components in detail. Such a framework is applied in Sections V and VI to compare the impact of the stochastic calibration technique, AVLR or GMWM, on the performances of the navigation filter, both based on simulated sensor errors and real-world ones. Finally, Section VII concludes this article. II. MAIN CONTRIBUTIONS The main contributions of this work are summarized as follows. 1) We introduce a framework based on Monte Carlo simulations and realistic, user-configurable noise generation mechanisms that allow us to closely replicate real-world navigation conditions. This framework can be used to assess and quantify the performances of an INS in different conditions, such as nominal, in the presence of GNSS outages, when sensor measurements are contaminated with outliers, or if the stochastic model for any sensor has been misspecified. The performance metrics evaluated by the framework also include measures of the correctness of the uncertainty estimates for the navigation states, which is often difficult to verify in practice. 2) To illustrate the generality of the proposed approach, we consider an example application targeting an open research question: we study the impact of different statistical procedures to determine a suitable stochastic calibration for inertial sensors, namely, the AVLR and the GMWM. We propose to compare different techniques not in terms of the usual metrics used in system identification, such as information criteria or the likelihood, but in terms of the actual navigation performances that can be expected once the determined model is employed in an INS-based navigation system. Due to its generality and flexibility, our framework provides a suitable environment to evaluate the performance of new stochastic calibration methods compared to the existing ones. 3) A comprehensive simulation study suggests that the AVLR, which is widely used in practice, in certain cases may lead to stochastic models characterized by worse performances in navigation, e.g., in terms of position and orientation errors or correctness of the estimated uncertainty. In contrast, this appears not to be the case if the GMWM is employed, and our study sheds new light on what the conditions are under which the differences in navigation performances are substantial. 
4) Our framework is implemented in an open-source R package, making it available widely and allowing any user to easily perform a wide range of assessments of their own navigation system and reproduce our results. III. STOCHASTIC CALIBRATION OF INERTIAL SENSORS The measurements of inertial sensors are intrinsically affected by random errors that originate from the internal physics of the device and whose nature depends on the actual technology employed [22]. For instance, in a typical gyroscope, the error terms include WN, correlated random noise, bias instability, and RW. The sources of these errors differ from one gyroscope to another. Most of these error sources are correlated in time. Stochastic calibration refers to the modeling and characterization of such error sources for a specific device prior to its usage [11]. Indeed, any navigation filter performing information fusion between inertial, GNSS, and possibly other sensors needs correct stochastic models of the measurement errors. The quality of those models impacts directly the accuracy of the navigation solution and the correctness of its estimated uncertainty. In general, it is extremely difficult, if possible at all, to model the physical processes behind inertial measurement error, and anyways, such a model would be specific to a single device or family. Therefore, stochastic modeling methods are typically employed instead. Typically, a mixture of simple stochastic processes is selected such that it provides a suitable approximation of the stochastic properties of the measurement error. For example, one common choice is the sum of WN and RW processes. The selected model is then integrated into the navigation filter of choice (e.g., an EKF), after its characterizing parameters have been estimated. In the remainder of the section, we discuss two of the most common methods for the calibration of inertial sections, namely, the AV-and GMWM-based methods, whose performances are compared later in Sections V and VI. A. Allan Variance Linear Regression Method For inertial sensors, the most widespread modeling technique among practitioners and device manufacturers is the AVLR [12], conceived for the characterization of phase and frequency instability of precision oscillators, and suggested for the stochastic characterization of interferometric fiber optic gyroscopes in [17]. This is mainly a graphical technique based on the manual inspection of AV plots. The AV plot is derived from a sufficiently long error time series: several hours of sensor data are acquired, while the device is not moving, and thus, no signal is observed other than the measurement error and constant quantities, such as the Earth rotation rate and the gravity. A stochastic model for the measurement error, along with its characterizing parameters, can be determined based on the assumption that the different stochastic processes composing the total error appear in seemingly different regions of the AV plot as distinctive patterns, such as linear regions, as illustrated in Fig. 1. Practitioners typically model the error as a sum of simple stochastic processes, e.g., quantization noise (QN), WN, and RW, based on which patterns can be qualitatively identified in the AV plot. After the model has been selected, its parameters can be determined from those, for example, by means of linear regression. This procedure is commonly referred to as the AV linear regression method, and it is summarized in the IEEE standard [17]: "a first approximation [. . .] 
can be estimated by sketching in the asymptotes to the charted data analysis and computing approximate model coefficients." A comprehensive discussion of this approach is presented in [13]. The stochastic modeling procedures based on the AV suffer from several limitations. First, they rely on the assumption that only one underlying process completely determines the shape of the AV in a given region of the plot. This perfect separation of the processes does not actually hold, as all underlying stochastic processes have an impact on the entire range of the AV plot. Consequently, the AV-based method leads to inconsistent parameter estimates characterized in practice by large biases, even in the case of simple stochastic models (e.g., the sum of a WN and an RW [23]). The second limitation is that no pragmatic rule is available to users to solve the model selection problem, i.e., to decide which stochastic processes compose the total measurement error. Indeed, in realistic scenarios where multiple underlying stochastic processes are present, it is difficult to approximate the observed AV with simple stochastic processes having a linear representation in the standard AV log-log graph. More often, the empirical AV has a complex shape, and the user needs to resort to their intuition and experience to select a suitable model and the relevant scales on which to perform the linear regression. Finally, the AV technique is only able to estimate the parameters of models having a linear representation in the AV log-log plot. In order to obtain reasonable point estimates, it is also necessary that the parameters of the underlying stochastic processes are such that each process dominates in a distinct region of the AV, which is often not the case in practice. The presence of correlated noise, e.g., first-order Gauss-Markov processes, which are not linear in the AV log-log plot, can render the use of the AVLR method even more challenging. B. Moment Matching Techniques Many methods have been proposed that aim to minimize the distance between an empirical quantity (e.g., the AV) and its model-based counterpart (e.g., the AV implied by a given stochastic model), which can be expressed as a function of its parameters. An example of such methods developed in the context of inertial sensor stochastic calibration is the GMWM, as proposed in [14] and [24]. This method uses the wavelet variance (WV) instead of the AV. This choice is due to the statistical properties of the WV, which have been studied more thoroughly than those of the AV. However, these two quantities are very similar, and the interpretation of the AV presented in Section III-A also applies to the WV. Other moment-matching methods, such as the autonomous regression method for the AV [25], have been proposed for the estimation of inertial sensors' stochastic models. However, Guerrier et al. [15] demonstrated that these methods can be seen as special cases of the standard GMWM approach. As previously mentioned, the underlying idea of the GMWM approach is to match the empirical and model-based WVs. Indeed, the theoretical WV of several stochastic processes, such as autoregressive moving average (ARMA), QN, and WN, is known analytically as a function of the model parameters due to the results of Zhang [26]. Moreover, the WV of the sum of several stochastic processes is simply the sum of the WVs of the individual processes.
Thus, the model parameters θ ∈ Θ can be obtained as follows:

$$\hat{\theta} = \underset{\theta \in \Theta}{\arg\min} \; \left(\hat{\nu} - \nu(\theta)\right)^{\mathsf{T}} \Omega \left(\hat{\nu} - \nu(\theta)\right) \qquad (1)$$

where θ is the vector of model parameters, ν̂ is the empirical WV of the time series, ν(θ) is the theoretical WV implied by the parameter vector θ, and Ω is a weighting matrix that depends on the uncertainty of ν̂. The optimization problem in (1) is typically solved iteratively using gradient-based or Gauss-Newton methods. If the WV of a given model is linear in the model parameters, which is the case for arbitrary sums of QN, WN, RW, and drift, the optimization problem can be solved in closed form [15]. Additional information on this method can be found, for example, in [14] and [24]. IV. ASSESSMENT FRAMEWORK FOR STOCHASTIC MODEL IMPACT ON NAVIGATION As previously mentioned, stochastic calibration is required in order to characterize the inertial sensor measurement uncertainty so that the chosen sensor fusion algorithm can properly integrate the inertial measurements with other sensor measurements and estimate a reliable navigation solution. While many techniques exist for such purposes, such as those reviewed in Section III, it remains difficult to quantify the actual impact of a given choice of stochastic models on the navigation performances of the system in a given application scenario. For instance, the stochastic calibration procedure may yield an accurate model of the long-term correlations of the measurement error, whereas a much simpler model could perform similarly for the user application where, for instance, a drone is flown only for a few minutes at a time. As another example, many alternative models may exist for a given sensor that perform comparably with respect to the metric employed in stochastic calibration (i.e., a similar value of the objective function defined in (1) at the solution), making the choice difficult and potentially subjective for the users. In this section, we propose an approach to quantify the impact on navigation performances of different, competing setups, such as sensor selection or modeling choices. The proposed approach is depicted in Fig. 2. To illustrate the generality of our framework, we will later consider the problem of comparing the stochastic calibrations obtained from different statistical procedures, namely, the AVLR method and the GMWM. More precisely, we consider a user-specified trajectory that is typical of the application scenario, available as samples of the body position and orientation. This trajectory is used as the ground truth. The available samples are differentiated to obtain higher order kinematic states, such as velocities and accelerations, which are used to calculate noise-free readings, as they would be measured by perfect sensors mounted on the body. The noise-free readings are then corrupted with noise samples taken contiguously from random portions of static acquisition data (e.g., as collected during standard stochastic calibration procedures), which therefore consist of samples of only the actual sensor noise. A sensor fusion algorithm is configured with the stochastic model under investigation, for example, determined with the AVLR method. The sensor fusion algorithm processes the noisy readings and estimates the final navigation solution. This procedure is repeated in a Monte Carlo fashion. The set of obtained navigation solutions is aggregated to compute a suite of statistics relevant to assess the expected performances of the system in the given scenario, for example, in terms of mean position and orientation error, or consistency of the estimated uncertainty.
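As a concrete illustration of the moment-matching idea in (1), the following minimal sketch fits a WN + RW model by matching an empirical Allan variance (which, as noted in Section III-B, can be read analogously to the WV) against its model-implied counterpart, using the standard closed-form expressions N²/τ for WN and K²τ/3 for RW and an implicit identity weighting. Function names and parameter values are illustrative and do not correspond to the GMWM package API.

import numpy as np

def allan_variance(x, taus, fs):
    """Non-overlapping Allan variance of an error series x sampled at fs Hz,
    evaluated at the averaging times in taus (seconds)."""
    av = []
    for tau in taus:
        m = int(round(tau * fs))                      # samples per cluster
        means = x[: (len(x) // m) * m].reshape(-1, m).mean(axis=1)
        av.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(av)

def fit_wn_plus_rw(av_emp, taus):
    """Match empirical and model AV for a WN + RW model,
    avar(tau) = N^2 / tau + K^2 * tau / 3, which is linear in (N^2, K^2)."""
    A = np.column_stack([1.0 / taus, taus / 3.0])
    coeffs, *_ = np.linalg.lstsq(A, av_emp, rcond=None)
    N2, K2 = np.clip(coeffs, 0.0, None)               # variances cannot be negative
    return np.sqrt(N2), np.sqrt(K2)

# Illustrative usage on synthetic data: WN plus a small RW, 200 000 samples at 100 Hz.
rng = np.random.default_rng(0)
fs, n = 100.0, 200_000
x = 1e-3 * rng.standard_normal(n) + np.cumsum(1e-6 * rng.standard_normal(n))
taus = np.logspace(-1, 2, 20)
N, K = fit_wn_plus_rw(allan_variance(x, taus, fs), taus)
print(N, K)                                           # printed values are only indicative

Because the model AV here is linear in (N², K²), the fit reduces to ordinary least squares, echoing the closed-form case mentioned above for (1).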
In order to evaluate the impact of different choices of sensor stochastic models, the user just needs to configure the sensor fusion algorithm appropriately and compare the different statistics obtained after the Monte Carlo simulations. As previously mentioned, the proposed approach has been implemented in the form of an open-source R package. 1 This software allows all the steps outlined in Fig. 2 to be performed easily and computes many relevant statistics to compare competing stochastic models. We use this software package to perform all the investigations presented in the rest of this work, and therefore, our results can be easily replicated. In the remainder of the section, we discuss the different components of the framework in further detail. A. Noise-Free Measurements Generation We assume in this section that a suitable trajectory is available. This trajectory can be chosen to resemble the user's use case, for example, in terms of the dynamics of the body motion and its duration. The trajectory is available as a sequence of positions, r^n_t, and orientations, R^n_{b,t}, of the body frame b at the target inertial sensor rate. These are expressed with respect to a fixed, Cartesian, nonrotating navigation frame n. The trajectory is assumed to be the ground truth in the following; therefore, neither its accuracy nor how it was originally determined is relevant to what follows. From the position and orientation samples, we derive higher order kinematic properties of the body frame as follows:

$$v^{n}_{t} = \frac{r^{n}_{t+\Delta t} - r^{n}_{t}}{\Delta t}, \qquad a^{n}_{t} = \frac{r^{n}_{t+2\Delta t} - 2\, r^{n}_{t+\Delta t} + r^{n}_{t}}{\Delta t^{2}}, \qquad \omega^{b}_{nb,t} = \frac{1}{\Delta t}\, \log\!\left( \left(R^{n}_{b,t}\right)^{\mathsf{T}} R^{n}_{b,t+\Delta t} \right)^{\vee} \qquad (2)$$

where v^n_t and a^n_t are the body frame velocity and acceleration at time t expressed in n, respectively. Moreover, ω^b_{nb,t} is the body frame angular velocity of b with respect to n, expressed in b, log(·) is the logarithmic map in the 3-D rotation group SO(3), and Δt is the trajectory sampling time. Next, noise-free inertial sensor readings are computed. The specific force and the angular velocity readings are given by

$$f^{b}_{t} = \left(R^{n}_{b,t}\right)^{\mathsf{T}} \left(a^{n}_{t} - g^{n}\right), \qquad \tilde{\omega}^{b}_{t} = \omega^{b}_{nb,t} + \left(R^{n}_{b,t}\right)^{\mathsf{T}} R^{n}_{e}\, \omega^{e}_{ie} \qquad (3)$$

where g^n is the gravity acceleration expressed in n, R^n_e is the (constant) orientation of the Earth-centered Earth-fixed (ECEF) frame e with respect to the navigation frame n, and ω^e_{ie} is the Earth rotation rate. In the following, we will ignore effects such as the Earth's rotation or a position-dependent gravity vector. While they are important in real-world navigation systems, at least above certain accuracy requirements, they are deterministic; once they have been accounted for in the sensor fusion algorithm, they play a limited or no role when looking at the performances of inertial stochastic models. We also note that, in MEMS IMUs, these effects are often masked by the nonnegligible turn-on bias and bias instability. For simplicity, the position and orientation derivatives are evaluated in (2) by means of first- and second-order forward finite difference schemes. If desired, more sophisticated differentiators can be employed (see [27]). Alternatively, as done in [28] and [7], a continuous-time representation of the discrete input trajectory can be obtained using splines of sufficient order. From the splines, higher order kinematic quantities can be obtained analytically at any desired (continuous) time t. In this work, we have chosen the approach in (2) so that a Kalman filter employing the corresponding integration scheme (see Section IV-C) would recover the input trajectory exactly if noise-free measurements were provided as input.
By doing so, no spurious integration noise is introduced by the filter process model, and the error sources under investigation, e.g., an imperfect stochastic model, can be investigated selectively. Also, note that, as shown, for example, in [29], differences between integration schemes tend to fade when nonnegligible random noise is considered. B. Noisy Measurement Generation We provide two methods for generating noisy measurements: using data collected from the sensor and using a reference model to simulate the noise data. In the first (and preferred) method, we assume that the user has collected data from the inertial sensor under evaluation in static conditions for stochastic calibration purposes. Since the device is static, the collected data consist of actual noise samples that can be used to corrupt the noise-free measurements. Under the assumption that the inertial sensor's performance is relatively independent of environmental conditions, the noise samples collected in static conditions have similar, if not the same, stochastic properties as the noise encountered during real-world operation. Thus, we add contiguous chunks of the available static data to the noise-free inertial measurements to obtain the noisy ones. A different start time for each chunk is randomly selected for each Monte Carlo simulation. However, the assumed independence between IMU stochastic models and environmental conditions does not hold entirely, at least for low-cost IMUs. Indeed, the stochastic properties of the measurement error and/or the deterministic calibration (e.g., scale factors) may depend on temperature (see [30]) and motion dynamics [31]. The effect of the device temperature can readily be investigated with the proposed framework simply by collecting static data in a temperature chamber and varying the temperature according to the user application. However, the dependence on motion dynamics is more difficult both to model and then to compensate for in a sensor fusion system, and it is not considered in the following. Alternatively, a reference stochastic model can be assumed for the inertial sensors and sampled to obtain noise data. This is useful if static data are not available or, for example, if one wants to quickly evaluate different sensors available on the market. An a priori stochastic model is also used to generate noise samples for the GNSS sensor. Indeed, differently from inertial sensors, it is well known that the noise properties of GNSS measurements are very much dependent on environmental conditions, such as receiver surroundings, satellite constellation, ionosphere and troposphere conditions, and so on, so that static data are probably not representative of real-world operation. Since this work focuses on inertial sensors, the GNSS measurement error is assumed to be white and uncorrelated. More realistic GNSS error samples could be obtained using a GNSS simulator and then used to corrupt noise-free position and velocity measurements. C. Sensor Fusion Algorithm Let N denote the number of Monte Carlo simulations. For simulation i, with i ∈ {1, . . . , N}, a sensor fusion algorithm is employed to fuse together the noisy inertial and GNSS observations and estimate the navigation solution, ^{(i)}r̂_t and ^{(i)}R̂^n_{b,t}, as well as its estimated covariance ^{(i)}Σ̂_t. For the sensor fusion algorithm, we chose the EKF, as this method is at the core of most real-time navigation systems, and its behavior is well known and understood among practitioners.
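A minimal end-to-end sketch of the measurement generation described in Sections IV-A and IV-B is given below: noise-free specific-force and angular-rate readings are derived from the ground-truth trajectory via forward finite differences and the SO(3) logarithmic map, and are then corrupted with a randomly positioned contiguous chunk of recorded static noise. The gravity vector, function names, and the synthetic static recording in the usage lines are illustrative assumptions, not part of the R package.

import numpy as np
from scipy.spatial.transform import Rotation

def noise_free_imu(r_n, R_nb, dt, g_n=np.array([0.0, 0.0, -9.81])):
    """Noise-free IMU readings from ground-truth positions r_n (T, 3) and
    body-to-navigation rotation matrices R_nb (T, 3, 3), sampled every dt seconds.
    Earth rotation and gravity variations are ignored, as stated in the text."""
    v_n = (r_n[1:] - r_n[:-1]) / dt                        # first-order forward difference
    a_n = (r_n[2:] - 2.0 * r_n[1:-1] + r_n[:-2]) / dt**2   # second-order forward difference
    f_b, w_b = [], []
    for t in range(len(a_n)):
        R_t = Rotation.from_matrix(R_nb[t])
        R_t1 = Rotation.from_matrix(R_nb[t + 1])
        w_b.append((R_t.inv() * R_t1).as_rotvec() / dt)    # SO(3) log of relative rotation
        f_b.append(R_t.inv().apply(a_n[t] - g_n))          # specific force in the body frame
    return np.array(f_b), np.array(w_b)

def corrupt_with_static_noise(clean, static_noise, rng):
    """Add a contiguous chunk of recorded static noise (random start per Monte Carlo run)."""
    start = rng.integers(0, len(static_noise) - len(clean))
    return clean + static_noise[start:start + len(clean)]

# Illustrative Monte Carlo run on a constant-pose trajectory.
rng = np.random.default_rng(0)
T, dt = 6000, 0.01
r_n = np.zeros((T, 3))
R_nb = np.tile(np.eye(3), (T, 1, 1))
f_b, w_b = noise_free_imu(r_n, R_nb, dt)
static_gyro = 1e-4 * rng.standard_normal((500_000, 3))     # stand-in for a static acquisition
w_noisy = corrupt_with_static_noise(w_b, static_gyro, rng)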
The choice of the sensor fusion algorithm does not affect the principle of operation of the approach proposed in Fig. 2, and any other method, such as unscented Kalman filters, particle filters, or even smoothing-based methods, such as dynamic networks [32], could replace the EKF, directly allowing the performances of different state estimation algorithms to be compared. We consider a standard error-state formulation of the EKF, as presented, for example, in [1]. In such an EKF, the inertial sensor process noise models can be configured as an arbitrary sum of the following stochastic processes: 1) random constant; 2) WN; 3) autoregressive process of order one (AR1), which is equivalent to a first-order Gauss-Markov process; 4) RW; 5) drift. Multiple AR1s (or Gauss-Markov processes) can be considered, as will be demonstrated in Section VI-A2. By combining the previously mentioned processes, a wide class of models can be constructed, which includes the standard models for inertial sensor errors. Other Gaussian processes, such as ARMA models of order higher than one, can also be considered (see [33]). The proposed framework could be easily extended to include such processes. However, this choice does not appear to be widespread in practice and is left for future research. The implemented EKF employs a first-order Euler method to integrate the process model. This method is simplistic, and higher order integration methods are preferable in real-world applications. However, here, the goal is to evaluate the impact of the stochastic models of the sensor errors, not the fidelity of the filter mechanization. Thus, the latter has been chosen to match the noise-free measurement generation mechanism presented in Section IV-A so that it produces exactly the original position and orientation samples when integrating noise-free measurements. This choice prevents spurious integration noise from impacting the further analyses. D. Navigation Performance Statistics The navigation solutions are then aggregated to compute statistics that are useful to evaluate the performance of the navigation system. The details are given in the following. 1) Position and Orientation Error: The original trajectory provided by the user gives the ground truth, and each estimated solution can be compared to the reference in terms of the position and orientation errors

$$^{(i)}\delta r_t = {}^{(i)}\hat{r}_t - r^{n}_{t}, \qquad {}^{(i)}\delta R_t = \log\!\left(\left(R^{n}_{b,t}\right)^{\mathsf{T}}\, {}^{(i)}\hat{R}^{n}_{b,t}\right)^{\vee} \qquad (4)$$

where the (·)^∨ operator gives the three elements composing the skew-symmetric matrix returned by log(·), ^{(i)}δR_t thus being a 3-D representation of the difference between the estimated and reference orientations. In the following, for simplicity, we will average both position and orientation errors over the three axes. The error samples computed with (4) can then be aggregated with respect to the different navigation solutions, with respect to time, or both, by means of any sample-based operation, such as the sample mean and sample covariance. Given the statistical properties of such sample-based estimators, and provided that N is sufficiently large (e.g., several hundreds), those quantities are expected to approach the true values of the underlying quantities. E. Normalized Estimation Error Squared The normalized estimation error squared (NEES) is a commonly employed metric to evaluate whether the error in position and orientation is consistent with its estimated covariance. An in-depth discussion can be found in [4, Ch. 3.7.4].
The NEES is defined as follows:

$$^{(i)}\mathrm{NEES}_t = {}^{(i)}e_t^{\mathsf{T}} \left({}^{(i)}\hat{\Sigma}_t\right)^{-1} {}^{(i)}e_t, \qquad {}^{(i)}e_t = \left[\,{}^{(i)}\delta r_t^{\mathsf{T}} \;\; {}^{(i)}\delta R_t^{\mathsf{T}}\,\right]^{\mathsf{T}} \qquad (5)$$

In a Monte Carlo setup with N simulations, N times the average NEES at time t, which we denote as $\overline{\mathrm{NEES}}_t$, follows (approximately) a chi-square distribution with 6N degrees of freedom (6 being the dimension of the stacked ^{(i)}δr_t and ^{(i)}δR_t vectors). Thus, a two-sided confidence interval for $\overline{\mathrm{NEES}}_t$ with level 1 − α can be expressed as [ε₁, ε₂], where

$$\epsilon_1 = \frac{\chi^2_{6N}(\alpha/2)}{N}, \qquad \epsilon_2 = \frac{\chi^2_{6N}(1-\alpha/2)}{N} \qquad (6)$$

and χ²_n(α) is the αth quantile of a chi-squared distribution with n degrees of freedom. A one-sided confidence interval can be constructed similarly. The evolution of $\overline{\mathrm{NEES}}_t$ with respect to the bounds [ε₁, ε₂] is an important metric used in practice to assess whether the sensor fusion algorithm is overconfident, meaning that the actual error is typically larger than the estimated uncertainty of the position and orientation estimates, underconfident (the opposite), or neither. F. Coverage The consistency and the efficiency of the estimated position and orientation, and of their estimated covariance, can be evaluated in an alternative way, as commonly done in the statistical literature. We first define the one-sided confidence interval of level 1 − α for ^{(i)}NEES_t as follows:

$$\left[\,0, \; \chi^2_{6}(1-\alpha)\,\right] \qquad (7)$$

Next, we introduce the following binary variable:

$$^{(i)}c_t = \begin{cases} 1, & \text{if } {}^{(i)}\mathrm{NEES}_t \le \chi^2_{6}(1-\alpha) \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$

If the covariance ^{(i)}Σ̂_t is sufficiently well estimated, ^{(i)}c_t follows approximately a Bernoulli distribution with parameter p = 1 − α, which allows the simulation error to be assessed. The coverage of the confidence intervals defined in (7) is given by the average of ^{(i)}c_t over the N Monte Carlo runs, c̄_t. The coverage, c̄_t, measures how often, in practice, the confidence intervals constructed from ^{(i)}Σ̂_t and centered at ^{(i)}r̂_t and ^{(i)}R̂^n_{b,t} include, or cover, the true position and orientation. In other words, such confidence intervals are trustworthy if c̄_t is close to 1 − α. This measure provides an alternative approach to assess the quality of the computed navigation solution. V. CASE STUDY: ALLAN VARIANCE LINEAR REGRESSION METHOD VERSUS GMWM In this section, we apply the proposed framework to compare different statistical estimators, leading to different stochastic models for the inertial sensors, in terms of navigation performance. More precisely, we compare the AVLR method and the GMWM, both briefly presented in Section III. This example is relevant since it allows investigating whether, and when, a more sophisticated statistical method, such as the GMWM, should be used instead of the commonly employed AVLR method. Indeed, we consider an experimental setup composed of two steps: 1) a calibration step, in which the stochastic models are determined from available noise data (for example, collected during static acquisitions) using the two previously mentioned estimators; 2) Monte Carlo navigation simulations, where the framework depicted in Fig. 2 is employed to quantify the navigation performances of the estimated models. In our experiment, the estimation of the parameters of the stochastic models taking place in the first step is performed based on several hours of data collected from a static device. Since the device is static, the collected measurements are composed of noise samples only, and the selected stochastic modeling techniques are employed to determine the two sets of stochastic models (each set being composed of the gyroscope and accelerometer models) that we seek to compare. Next, we evaluate the determined stochastic models in a realistic scenario using the proposed Monte Carlo simulation framework.
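The NEES and coverage statistics of Sections IV-E and IV-F can be computed from the Monte Carlo outputs as sketched below. The 6-D error vector, the chi-square bounds for the average NEES, and the Bernoulli-based coverage follow the definitions in (5)-(8); the function names and the consistency check at the end are illustrative assumptions.

import numpy as np
from scipy.stats import chi2

def nees(err, cov):
    """Per-run NEES as in (5): err is the stacked 6-D position/orientation error,
    cov its 6x6 estimated covariance."""
    return float(err @ np.linalg.solve(cov, err))

def nees_bounds(N, dof=6, alpha=0.05):
    """Two-sided (1 - alpha) bounds for the Monte Carlo average NEES, as in (6)."""
    return chi2.ppf(alpha / 2, dof * N) / N, chi2.ppf(1 - alpha / 2, dof * N) / N

def coverage(nees_runs, dof=6, alpha=0.30):
    """Average of the binary variable in (8) over the Monte Carlo runs."""
    return float(np.mean(np.asarray(nees_runs) <= chi2.ppf(1 - alpha, dof)))

# Illustrative consistency check with errors drawn exactly from the reported covariance.
rng = np.random.default_rng(1)
cov = np.diag([0.5, 0.5, 1.0, 0.01, 0.01, 0.02])
runs = [nees(rng.multivariate_normal(np.zeros(6), cov), cov) for _ in range(500)]
lo, hi = nees_bounds(N=500)
print(np.mean(runs), (lo, hi), coverage(runs))   # mean near 6, inside bounds, coverage near 0.7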
We consider a trajectory of a real fixed-wing unpiloted aerial vehicle (UAV) performing an aerial mapping mission lasting approximately 40 min. By performing hundreds of simulations, each time employing different noise realizations, we obtain reliable statistics of the navigation performances achievable with each of the two sets of models. From these statistics, the model that should be preferred for implementation in the final application can be picked. As discussed in detail in Section VI, we consider two scenarios. In the first scenario, we consider synthetic inertial sensors for which the true noise model is assumed to be known. The chosen models aim to mimic the observed AV/WV shape of real-world low-to-mid-grade inertial sensors. These models are sampled to generate both the synthetic noise data for the stochastic calibration and the noisy inertial readings for the Monte Carlo simulations. In the second scenario, a real sensor is considered, and multiple static acquisitions are performed to collect data both for the stochastic calibration and for the realistic noise samples used to corrupt the inertial readings during navigation. A. Flexible Set of Stochastic Models We define a general set of models that can well approximate the vast majority of models used for inertial sensors. Such a set is defined as the sum of M AR1s, which we write as first-order Gauss-Markov processes because of the practical meaning of their parameters (correlation time and variance of the innovation):

$$e(t) = \sum_{i=1}^{M} b_i(t), \qquad \dot{b}_i(t) = -\frac{1}{\tau_{c,i}}\, b_i(t) + \xi_i(t) \qquad (9)$$

where e(t) is the measurement error affecting the inertial sensor at time t, ξ_i(t) is a (continuous-time) WN with power spectral density q_i, and τ_c,i is the ith process correlation time. This class of models is very general and includes most of the processes typically considered in modeling inertial sensors; some well-known special cases are given in the following. 1) WN, typically referred to as angular RW for gyroscopes and velocity RW for accelerometers, is obtained when τ_c,i → 0. 2) RW, or rate RW in the case of gyroscopes, is obtained when τ_c,i → ∞ (any long-term correlation observed in practice can be modeled with a sufficiently high τ_c,i). 3) Bias instability, or flicker noise, is defined in terms of its power spectral density

$$S(f) = \frac{B^2}{2\pi}\, \frac{1}{f}, \qquad f \le f_0 \qquad (10)$$

Since no state-space model can be derived for this process, it cannot be employed directly in state estimation algorithms, e.g., in an EKF. As suggested in [34, Sec. 4.3], or in [17], it is "sometimes approximated by a Markov model or a multiple stage ARMA model," such as the one in (9). 4) Turn-on bias, or random constant, the constant part of the sensor error that changes at every power cycle, can be modeled by setting q_i = 0 and τ_c,i → ∞. Two other commonly employed processes do not fall in the proposed model family. Those are the QN and the drift, or rate ramp, in gyroscopes. Since both of these can be estimated with both the AVLR and the GMWM, they are not considered in this comparison of the two methods. In stochastic calibration, it is more customary to consider the equivalent, discrete-time formulation of (9),

$$e_k = \sum_{i=1}^{M} b_{i,k}, \qquad b_{i,k+1} = \varphi_i\, b_{i,k} + w_{i,k}, \qquad w_{i,k} \sim \mathrm{WN}\!\left(0, \sigma_i^2\right) \qquad (11)$$

where the well-known relations between the correlation time and the innovation power spectral density on one side, and the discrete-time parameters φ_i ∈ [0, 1] and σ_i² > 0 on the other, are given in the following:

$$\varphi_i = \exp\!\left(-\frac{1}{f\,\tau_{c,i}}\right), \qquad \sigma_i^2 = \frac{q_i\,\tau_{c,i}}{2}\left(1 - \varphi_i^2\right) \qquad (12)$$

where f is the sampling frequency. For a given Gauss-Markov process i, the two parameterizations (φ_i, σ_i²) and (τ_c,i, q_i) are equivalent given f, and they will be used interchangeably depending on which one is more intuitive in the given context.
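To illustrate the model family in (9)-(12), the following sketch samples the sum of M discrete first-order Gauss-Markov (AR1) processes and maps continuous-time parameters (τ_c, q) to their discrete counterparts (φ, σ). The parameter values in the example are illustrative and are not taken from the paper's experiments.

import numpy as np

def gm_discretize(tau_c, q, f):
    """Map the continuous-time parameters (tau_c, q) of one Gauss-Markov process
    to the discrete-time parameters (phi, sigma) at sampling frequency f, as in (12)."""
    phi = np.exp(-1.0 / (f * tau_c))
    sigma = np.sqrt(q * tau_c / 2.0 * (1.0 - phi**2))
    return phi, sigma

def simulate_gm_sum(n, phis, sigmas, rng):
    """Sample n points of the error model in (11): a sum of M discrete AR1 processes."""
    e = np.zeros(n)
    for phi, sigma in zip(phis, sigmas):
        b = np.zeros(n)
        w = sigma * rng.standard_normal(n)
        for k in range(1, n):
            b[k] = phi * b[k - 1] + w[k]     # first-order Gauss-Markov recursion
        e += b
    return e

# Example at 100 Hz: a nearly white component (tiny tau_c) plus a slowly varying one.
rng = np.random.default_rng(0)
params = [gm_discretize(1e-3, 1e-6, 100.0), gm_discretize(500.0, 1e-9, 100.0)]
e = simulate_gm_sum(100_000, *zip(*params), rng)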
The GMWM provides estimates having suitable statistical properties for the set presented in (11) when the number of Gauss-Markov processes, M, is arbitrary but known, and it can also be used to determine the optimal M for given calibration data. On the contrary, the AVLR method would typically estimate a model with M = 2 Gauss-Markov processes belonging to the WN (or angular or velocity RW) and RW (rate RW) special cases. The AVLR is also capable of estimating the bias instability in terms of B and f, as in (10). However, it is difficult to relate those to a suitable state-space model that can be employed in an EKF. Given the specific features of the two stochastic calibration methods presented before, their models may differ. To illustrate this difference, we considered approximately 2 h of static data collected using the Novatel KVH 1750 IMU mounted on a high-precision single-axis rotation table, presented in Fig. 4.
Fig. 3. Top: 200-Hz static data acquired from the X-axis of a KVH1750 accelerometer (only the first hour is shown). Bottom: the empirical WV of the data (in red) and the theoretical WV as implied by the model obtained with the AVLR (green) and the GMWM (brown). The decomposition of the GMWM model into four first-order Gauss-Markov processes, one of which degenerates into a WN, is also shown.
Fig. 4. Data acquisition setup. The sample IMU is mounted on a high-precision single-axis rotation table from Actidyn. In the figure, the IMU was offset with respect to the center of the rotation table to obtain a nonzero signal on the x- and y-axes of the accelerometer in dynamic acquisitions. Courtesy of Clausen [11].
In Fig. 3, we compare the empirical WV (which can be interpreted similarly to the AV) of the X-axis accelerometer with the models estimated with the AVLR (in green) and the GMWM (in brown). It can be observed that the AVLR model does not provide a close match to the empirical WV between scales 10^1 s and 10^3 s. Indeed, the AVLR method does not allow estimating state-space stochastic models that would well approximate the flat region in the WV evident at those scales. On the contrary, the GMWM estimates appear to provide a better model fit, considering a model composed of M = 4 Gauss-Markov processes, the first of which degenerates into a WN (τ_c,1 = 0). The theoretical WV of the resulting model matches the observed one to an excellent degree. In particular, the GMWM allows obtaining a model for the long-term correlation of the error, for example, between scales 10^1 and 10^3 s. No such model is provided by the manufacturer [35] or can be estimated reliably with the available graphical methods based on the AV. In Section VI, we will investigate in which cases the extra modeling power offered by the GMWM leads to superior performances in navigation. VI. SIMULATION RESULTS In this section, we compare the performance of stochastic calibration based on the AVLR and the GMWM, respectively, in two different scenarios. In the first scenario, we assume that the stochastic model behind noise generation in the inertial sensors is known and belongs to the set of models introduced in Section V-A. Many different parameter values are considered. In the second scenario, we consider a real sensor, the Xsens MTI-G IMU [36], for which data from a static acquisition are available. This sensor is characterized by a significant bias instability, and the WV of the static acquisitions matches that of some of the models considered in the first scenario.
A. Scenario 1: Known Stochastic Models We consider a simulation scenario in which the true stochastic model behind the inertial sensor noise is known. In the following, we first consider the class of models introduced in (11), with M = 2, in Section VI-A1. Next, in Section VI-A2, we consider a more complex model family, where M = 3. In both cases, the first Gauss-Markov process (or AR1) always degenerates to a WN, i.e., τ_c,1 → 0. 1) Model 1: 1 WN + 1 AR1: The stochastic model for the gyroscope is assumed to be the sum of a WN and a Gauss-Markov process, i.e., e_k = ξ_k + b_k with b_{k+1} = φ b_k + w_k. 24 different instances from this model class, each one characterized by different parameters, have been chosen as follows. 1) The standard deviation of the WN is fixed at σ_ξ = 10⁻⁶ rad/s. 2) For the first 12 instances (the large set from now on), the variance of the AR1 process is large compared to the WN, and it is small for the remaining 12 (the small set from now on). The variance of the AR1 process is given by σ_w²/(1 − φ²), where σ_w² is the innovation variance, and it is kept constant at 5 × 10⁻⁸ rad²/s² for the large set and at 5 × 10⁻⁹ rad²/s² for the small set. 3) The jth instances of both the large and small sets, with j ∈ {1, 2, . . . , 12}, have the same value of φ, and φ_j spans [0, 1]. In other words, we consider processes with increasing correlation time τ_C in the first-order Gauss-Markov parameterization. Moreover, we consider a fixed model for the accelerometer noise, namely, a WN with standard deviation 5 × 10⁻⁵ m/s². The resulting stochastic processes are realistic for MEMS IMUs. The theoretical WVs of the considered models are depicted in Fig. 5, with the large set on the left and the small set on the right. For each of the model instances, we sample a time series, mimicking the static acquisition process that would provide the stochastic calibration data for a real inertial sensor. Next, we use the generated noise time series to estimate a stochastic model using the GMWM and the AVLR. The estimated model is used in an EKF to process noisy GNSS and inertial readings, corrupted by noise sampled in the same way as for the synthetic static acquisition data. The performances of the navigation solution are measured according to the metrics presented in Sections IV-D-IV-F. This procedure is repeated 500 times for each model instance in a Monte Carlo fashion. From Fig. 5, it is possible to see that the WVs of some of the model instances (e.g., the one in violet) have large parts that are linear in the scales τ and can be well approximated by the sum of a WN and an RW using the AVLR. However, this does not hold for most of the others (e.g., the ones highlighted in red, green, and light blue): for those cases, the model obtained by means of the AVLR differs from the true one in terms of its WV. In contrast, by means of the GMWM, it is possible to recover a good approximation of the true model for all the model instances. If the model employed in the EKF is substantially different from the true one used to generate the noisy inertial readings, the quality of the navigation solution may degrade. In the following, we quantify such degradation in terms of position and orientation errors within GNSS coverage, i.e., when a position fix is available, and during a GNSS outage period of 60 s. The position and orientation errors for each model instance are presented in Figs. 6 and 7. These quantities correspond to ^{(i)}δr_t and ^{(i)}δR_t, as defined in Section IV, averaged over the Monte Carlo simulations and the three axes.
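Since the AR1 stationary variance equals the innovation variance divided by (1 − φ²), the innovation standard deviation needed to keep that variance constant across the 24 instances defined above can be computed as sketched below. The 12 φ values used here are illustrative placeholders, as the exact values are not listed in the text.

import numpy as np

# AR1 stationary variance: sigma_w^2 / (1 - phi^2). Solve for the innovation std
# so that the stationary variance stays constant within each set of instances.
phis = np.linspace(0.0, 0.999, 12)              # phi_j spanning [0, 1), illustrative
for target_var in (5e-8, 5e-9):                 # large and small sets [rad^2/s^2]
    sigma_w = np.sqrt(target_var * (1.0 - phis**2))
    print(target_var, sigma_w)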
The results are expressed in relative units, in terms of the performances of the model estimated by means of the AVLR versus the one estimated with the GMWM. For example, a value of 110% implies that the model estimated with the AVLR achieves, on average, a position (or orientation) error 10% higher than the model estimated with the GMWM. In these figures, and the ones that will follow, the x-axis is the time relative to the beginning of the GNSS outage, at second 0, while, on the y-axis, we have the time constant τ C of the autoregressive process contained in that specific model instance. From Figs. 6 and 7, we can observe the following. 1) When the WV of the true model can be well approximated by the sum of a WN and an RW, i.e., when φ → 1 or, equivalently, τ C is high, both the GMWM and the AVLR achieve similar performances both in terms of position and orientation error. This result is somewhat expected as, in this case, both the AVLR and the GMWM are able to provide suitable approximations of the underlying data-generating process. 2) When the WV of the true model presents a peak or, anyways, it is not linear in the scales τ (see again Fig. 5, e.g., the model highlighted in green), only GMWM can estimate an accurate model, and the performances degrade by up to 50% in position and 80% in orientation with AVLR method, as this method, unlike the GMWM, is unable to provide an accurate approximation of the underlying data-generating process. 3) The higher the variance of the AR1 process is (large set compared to small set, on the left and on the right in Figs. 6 and 7, respectively), the higher the degradation of the performances will be. This is intuitive since the AR1 process is the one that cannot be estimated with the AVLR technique. 4) The degradation in terms of position error is mostly visible during the GNSS outage period only. Indeed, the GNSS position fix corrects for any error accumulated because of poorly modeled inertial readings. If the AR1 has a large variance compared to the WN, a peak in the position error is visible also soon after the GNSS position fixes have become again available. A possible explanation for this effect is due to a wrong model for the inertial sensor, which will build a wrong covariance matrix in the EKF, requiring more time to converge after the recovery of GNSS measurements. Additional information on this effect is shown when discussing the estimated uncertainty of position and orientation. 5) An inaccurate stochastic model for the inertial sensors will have more impact on the orientation error than on the position error. In addition, this impact can be observed even when the GNSS position fix is available. This is expected because the orientation estimates are known to be more dependent on the quality of inertial sensor measurements and models. Next, we analyze the quality of the confidence intervals for the position and orientation as estimated by the EKF. During navigation, the filter maintains a covariance matrix over the states, from which confidence intervals can be derived for those. If the stochastic model for the inertial measurement assumed in the EKF does not correspond to the one behind true noise generation, the covariance matrix will not be a good estimate of the probability distribution of the states. 
To quantify this, we evaluate the coverage as defined in Section IV-F with 1 − α = 70%: if the states' uncertainty estimated by the EKF is correct, we expect that 66% < c̄_t < 74%, with the interval accounting for the Monte Carlo simulation error. Indeed, assuming the filter provides exact coverage, the random variable ^{(i)}c_t follows a Bernoulli distribution with parameter 1 − α. Therefore, using the central limit theorem, an approximate 95% confidence interval for the average of ^{(i)}c_t over the N Monte Carlo simulations is given by

$$(1-\alpha) \;\pm\; 1.96\,\sqrt{\frac{\alpha\,(1-\alpha)}{N}}$$

which, for α = 30% and N = 500, yields approximately [66%, 74%]. The results are depicted in Fig. 8 for the large model set and in Fig. 9 for the small model set. The results are expressed in terms of the coverage error, which we define as the difference between the empirical probability that the true states are within the computed 70% confidence interval, c̄_t, and the assumed probability (i.e., 70%). Consequently, a negative value indicates an overconfident estimate of the position and orientation uncertainty, and vice versa. We can observe that the models obtained with the GMWM have the correct coverage (around 70%, or around 0% coverage error) at all times, while the models obtained with the AVLR lead to a large underestimation of the position and orientation uncertainty when φ → 1 or, equivalently, τ_C → ∞. This effect is more severe for the large model set and less severe for the small one, as was the case for the relative position and orientation errors. This result suggests that, in practice, if the user is particularly interested in having a reliable estimate of the navigation states' uncertainty, the stochastic models obtained with the GMWM can provide substantial advantages over the commonly used AVLR, at least for the specific structure of the underlying noise generation mechanism. 2) Model 2: 1 WN + 2 AR1s: We consider a more complex case in the following. This is similar to the one discussed before, with the exception that, this time, the true gyroscope noise model is the sum of two AR1s and a WN (M = 3). The parameters of these autoregressive processes have been chosen so that a flat region appears at the high scales of the theoretical WV. In this case, 12 different instances have been considered, and their theoretical WVs are plotted in Fig. 10(a). Such shapes are typical of low-to-medium grade inertial sensors, e.g., MEMS, exhibiting nonnegligible bias instability. In Fig. 10(b), we have plotted one of the chosen instances, in green in both figures, and compared it with the empirical WV of a static acquisition of a real sensor, the MTI-G MEMS IMU. It can be observed that the shape of the WV is very similar, suggesting that our experimental setup considers a realistic scenario. Similar to the results obtained in Section VI-A1, it can be observed that a good estimate of the original model can be obtained with the AVLR only for specific parameter values, namely, when the correlation times of the two autoregressive processes are either small or large. Instead, the GMWM allows estimating arbitrary mixtures of autoregressive processes, regardless of their correlation time, and thus recovers the underlying noise model with sufficient accuracy. We apply the same experimental procedure as the one discussed in Section VI-A1. The relative position and orientation errors achieved employing the models estimated with the AVLR versus the ones obtained with the GMWM are shown in Fig. 11. The coverage error, as defined in the previous section, is also presented in Fig. 12,
where it is possible to see that the models obtained with the AVLR lead to a large underestimation of the position and orientation uncertainty in the EKF. B. Scenario 2: Real Sensor Noise In this scenario, we consider a real IMU sensor, the Xsens MTI-G, a MEMS IMU, for which real data are available from static acquisitions in a controlled environment. As anticipated in Section VI-A, and shown in Fig. 10(b), this sensor exhibits nonnegligible bias instability, which appears as a flat region in the wavelet variance plot at high scales τ. The AVLR does not allow estimating a suitable state-space model for these data. Instead, a model can be obtained by approximating the flat region of the empirical WV as the sum of multiple autoregressive processes, which can be effectively estimated by means of the GMWM. The empirical WV of the noise data collected during the static acquisition is presented in Fig. 13, along with the theoretical WV of the model estimated with the GMWM and its decomposition. It is possible to see that the accelerometer noise model of the MTI-G presented in Fig. 13(a) shows a pattern in the WV similar to that of the model considered in our experimental setup and represented in Fig. 10(b). The empirical WV of the real noise data and the theoretical one estimated with the GMWM match to a remarkable extent once three AR1s are considered for the accelerometer and two for the gyroscope. The real sensor data used in this work are taken from the open-source imudata R package. 2 We evaluate the performances of the models estimated by means of the AVLR against the GMWM in an EKF. This time, instead of simulating noise samples from a stochastic model known a priori, which is not available in the case of a real sensor, we corrupt the inertial readings using real noise samples obtained during a static acquisition. An example of the raw data used in this section is shown in Fig. 14. Several hours of data are available, and each time, different contiguous sets are chosen in a Monte Carlo fashion, as discussed in Section IV. This approach permits evaluating the performance of a given stochastic model when realistic noise samples are provided, in a setting closer to a real-world application scenario. The obtained models are compared in terms of the relative position and orientation errors when configuring the EKF with the associated models. In Fig. 15, we show the position and orientation errors of the models estimated with the AVLR relative to the ones obtained with the GMWM. As anticipated, the models obtained by means of the AVLR fail to model accurately the stochastic nature of the error signal, and this translates into degraded navigation performances, up to 15% in position and 45% in orientation. The decomposition of the orientation error along the three axes is also shown to point out that, as expected, the maximum error is found on the yaw axis, and the AVLR model performs up to 50% worse than the GMWM on that axis. These results show the potential benefit of performing stochastic calibration with the GMWM on a real device and are in line with what was obtained in Section VI-A2: this can be seen by comparing with Fig. 11, along the blue line. VII. CONCLUSION In this work, we propose a simulation-based framework that allows us to assess the performances of INS/GNSS navigation systems in realistic settings.
Indeed, our approach allows considering flexible and user-configurable noise generation mechanisms and permits quantifying INS/GNSS navigation system performance in different conditions (e.g., GNSS outages and an inaccurate stochastic model for any sensor). Using this framework, we study the impact on navigation performances of different statistical procedures (AVLR and GMWM) used to obtain stochastic models for the inertial sensors. By comparing various simulation settings, our results suggest that the commonly used AVLR method can lead to considerably worse performances compared with the GMWM in terms of position and orientation errors, as well as in terms of the correctness of the estimated uncertainty of the states, especially for inertial sensors characterized by considerable bias instability. These results can have important practical implications, suggesting that the GMWM can provide significant improvements, in particular, when MEMS sensors are considered, where bias instability is often considerably high. Moreover, our framework provides a systematic method allowing researchers and practitioners to compare the performance of existing, and potentially new, stochastic calibration methods. In this work, we considered stochastic calibration as a proof of concept, but other problems related to the quantification of navigation system performances can be easily investigated with our framework, such as the robustness of an INS with respect to outliers or other modeling imperfections. Our method can bridge the gap between real-world experiments, which are always needed but often impractical, costly, and difficult to replicate a large number of times (e.g., to evaluate the correctness of estimated uncertainty), and pure statistical parameter estimation, as done, for example, in the stochastic calibration literature.
14,298.2
2023-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Measurement of the Argon emission spectrum of ICP plasma using a diagnostic system based on a photomultiplier tube array Optical emission spectroscopy (OES) is one of the most important diagnostic tools in plasma physics. A self-built spectroscopic diagnostic system with temporal and spatial resolution has been constructed using a photomultiplier tube (PMT) array, a spectrometer, and other parts. The problem of making the inlet plane of the fiber bundle coincide with the focal plane of the spectrometer is analyzed and solved. In addition, the synchronization calibration of the PMT outputs has been completed. This system is installed on an inductively coupled plasma (ICP) chamber in order to study the argon (Ar) emission spectrum generated from typical radio frequency (RF) and pulse discharges. The test results show that the intensity of the Ar emission spectrum increases with the power and pressure, but increases less with the flow and current ratio. Under pulse discharge conditions, the intensity of the spectrum does not change with the frequency, nor does the broadening of the spectrum change with time. Introduction In order to understand the plasma chemistry, various plasma process diagnostic tools, such as OES and the Langmuir probe, have been developed to quantify the concentration of reactive species in the plasma [1]. Because it is non-intrusive, inexpensive, and can be easily incorporated into an existing plasma reactor, OES quickly gained popularity in the microelectronics industry for monitoring plasma processing. Optical emission spectroscopy has been proved to be a powerful technique in plasma diagnostics [4][5][6][7]. There are mainly two kinds of OES methods: one is the measurement of the spectrum at certain wavelengths by a PMT and a slit [2][3][4][5]; the other uses two-dimensional detector arrays, such as charge-coupled device (CCD) or intensified charge-coupled device (iCCD) cameras, to provide a spatial multi-channel spectrum over a range of wavelengths [6,7]. CCDs have various advantages, including their small detector dimensions, high sensitivity, and significant signal-to-noise ratio. However, the readout period is often long and the repetition rate is usually limited to several kHz at most, which places stringent restrictions on the time response of the emission spectral measurement. In many cases, the readout time (~0.5 s) is much longer than the typical time constant of the plasma evolution; hence, the temporal variation of the intensity can hardly be obtained directly by using a spectroscopic system with a CCD or iCCD camera. In this paper, we describe the development of a novel spectroscopic measurement system based on a PMT array. We successfully constructed the measurement system to measure the spatially and temporally resolved intensity of plasma light emissions from ICP Ar RF discharges and pulse discharges.
Spectroscopic system Figure 1 shows the schematic overview of the spectroscopic measurement system based on the PMT array. Emissions from an ICP plasma chamber, in which the plasma is formed from argon gas, are collimated by convex lenses and introduced into the measurement system by the inlet optical fibers, which have an inlet core diameter of 2 mm. The other ends of the optical fibers are bundled into a one-dimensional vertical array fixed on a six-degree-of-freedom (6-D) adjusting mount. The one-dimensional vertical array is coupled with the entrance slit of a Czerny-Turner spectrometer (Horiba iHR550) by the 6-D adjusting mount. The spectrometer has a focal length of 550 mm and is equipped with three gratings of 1200, 1800, and 2400 grooves/mm, respectively. The light from the ICP plasma chamber goes into the spectrometer and is then focused on the focal plane at the spectrometer outlet. As with the inlet fiber and the entrance slit, we also adjust the bundle fiber fixed on a 6-D adjusting mount to couple the inlet plane of the bundle fiber with the focal plane of the spectrometer. The inlet plane of the bundle fiber is composed of 30×30 fibers and the outlet is composed of 30 fibers. Each of these 30 fibers corresponds to one row of the 30×30 fibers of the inlet plane. Eight of the 30 fibers are inserted into eight PMTs (HAMAMATSU R928), and each of them is equipped with a high-voltage socket (HAMAMATSU CC238). The PMTs convert the light signals into eight channels of electronic signals, which are then amplified by two preamplifiers (Stanford Research Systems SR445A 350 MHz preamplifier, 4-channel). Every amplified channel signal is collected by a multichannel scaler (MCS, ORTEC) and then analyzed by PC software. Focal plane adjustment In order to measure the spectrum accurately, the inlet plane of the bundle fibers must coincide with the focal plane of the spectrometer. We designed a fixture to fix the inlet plane of the bundle fibers on a 6-D adjusting mount, by which the inlet plane can be moved to coincide with the focal plane. As the focal plane is fixed, the inlet plane is movable by the adjusting mount. Assume that the focal plane lies in a fixed coordinate system OXYZ and the inlet plane in a mobile coordinate system O'X'Y'Z', as shown in Figure 2. When the inlet plane coincides with the focal plane, the coordinate system OXYZ coincides with the coordinate system O'X'Y'Z'. Besides adjusting the inlet plane to coincide with the focal plane, the outlet plane of the inlet fiber should also coincide with the entrance slit of the spectrometer. To obtain a higher resolution, the entrance slit should be narrow enough. Similarly, there are six degrees of freedom between the outlet plane and the entrance slit. Therefore, the 1 mm × 10 mm vertical fiber array of the inlet plane must be adjusted to coincide with the entrance slit to allow most of the light into the spectrometer. We also use a 6-D adjusting mount to achieve that.
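The coincidence condition between the fixed focal-plane frame OXYZ and the movable inlet-plane frame O'X'Y'Z' can be expressed as a six-parameter (three rotations, three translations) rigid transform that maps corresponding points of one plane onto the other with zero residual. The following sketch is purely illustrative of that condition; the reference points, pose parameterization, and offsets are assumptions, not part of the alignment procedure described above.

import numpy as np
from scipy.spatial.transform import Rotation

def misalignment(pose, inlet_points, focal_points):
    """Residuals between inlet-plane reference points mapped by a 6-DOF pose
    (rx, ry, rz, tx, ty, tz) and the corresponding focal-plane points;
    near-zero residuals mean the two planes coincide."""
    rot = Rotation.from_euler("xyz", pose[:3])
    mapped = rot.apply(inlet_points) + pose[3:]
    return np.linalg.norm(mapped - focal_points, axis=1)

# Illustrative check with three reference points per plane (units: mm).
inlet = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
focal = inlet + np.array([0.2, -0.1, 0.05])        # hypothetical offset between the planes
pose = np.array([0.0, 0.0, 0.0, 0.2, -0.1, 0.05])  # candidate mount adjustment
print(misalignment(pose, inlet, focal))            # ~0 when the adjustment is correct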
Performance evaluation Accurate evaluations of the total system performance, such as sensitivity, signal-to-noise (S/N) ratio, and magnification, are essential for appropriate usage of the spectroscopic system. Optical system setup and calibration are carried out using sharp spectral lines from laser or lamp emissions. A stable Hg lamp is employed to establish the initial alignment of the optical components so as to minimize unexpected noise signals. As the output of each PMT varies, the uniformity of the outputs in the light intensity measurement must be assured. At first, we turn the grating of the spectrometer to a fixed wavelength, and then we move the X axis of the adjustment mount so that the light reflected onto the focal plane is aimed at the central vertical array fiber in the inlet plane of the bundle fibers. After that, we insert the output fiber corresponding to the central vertical array fiber into each of the eight PMTs in turn. Moreover, we adjust the control voltage of the control circuit and the discriminator voltage of the multichannel scaler simultaneously to obtain almost the same output from the eight PMTs. During the measurement process, the dark signal from the surrounding environment could enter the system and produce a noise signal that disturbs the true light signal. Therefore, the discriminator should be selected carefully to filter the noise from the signal. In our measurement system, due to the weak dark signal, we choose the voltage value at which the signal counter is maximum as the discriminator voltage value. The final verified values of the control voltage, discriminator voltage, and signal counts corresponding to the eight PMTs are listed in detail in Table 1. After the initial arrangement, we calibrate the system at the He-Ne laser wavelength and then confirm that the system meets our requirements. 3 Experimental performance and discussion ICP RF discharge In the study of low-temperature ICP argon plasma, the 1s and 2p levels of argon atoms are often the focus of researchers and processing engineers. The argon 1s levels have higher density and more energy, which can greatly influence the active particles produced in plasma chemical reactions [8]. Due to the strong radiative transition from 2p to 1s, the 2p levels are very important for studying the radiation properties of argon plasma [8]. In our ICP RF argon discharge process, the pressure is 20 mTorr and the flow is 200 sccm; hence, the corona model is valid for analyzing the process. The reason is that other collisional processes are trivial and the density of metastables is too low to make a significant contribution to the excitation of the excited species under these conditions [9][10][11]. In a corona model for these levels, we have the electron-impact excitation from the ground-state atom,

Ar(gs) + e → Ar(2p₁) + e    (1)

and the spontaneous radiation,

Ar(2p₁) → Ar(1s₂) + hν    (2)

The symbol gs denotes the ground state, 1s and 2p denote the Paschen 1s and 2p levels, e denotes an electron, and hν a photon. The emission intensity, I, from the excited state 2p₁ is given by

I = A n(2p₁) ∝ n_e n(gs) Q_exc    (3)

The symbol A is the Einstein coefficient, n is the density, and Q_exc denotes the excitation rate coefficient [11].
Usually, strong emission lines from the Paschen 2p₁ level can be observed easily at a wavelength of 750.4 nm. Therefore, we choose the intensity at this wavelength to observe its variation with the various process parameters, such as pressure, flow, power, and current ratio. The current ratio is the ratio of the current of the inside coil to the total current of both the inside and outside coils. Under low pressure (<100 mTorr), the broadening of the wavelength profile is mainly the instrument broadening, as the Doppler broadening and Stark broadening are too small to be observed. We obtain the intensity at each wavelength by fitting the data from the eight MCSs corresponding to the eight PMTs. The intensity variation under different process conditions is shown in Figure 3 from (a) to (d). It can be seen from figure 3 that, with the increase of the processing parameters, the intensity also increases, except with the current ratio. As the current ratio increases, the intensity first increases and then decreases, and it reaches its maximum value when the current ratio equals 0.6. Although the intensity generally increases with the process parameters, it barely increases with the gas flow and evidently increases with the pressure and power, especially with the power. The reason lies in the fact that the pressure mainly affects the electron temperature and the power mainly affects the electron density. So, while the pressure and power increase, the intensity also increases due to the increase of the electron temperature and electron density. With the increase of the gas flow, the intensity increases as the density of the argon gas increases while the pressure is kept stable. Besides, the variation of the coil current ratio merely changes the spatial distribution of the plasma, which affects the light flux entering the measurement system. When the current ratio is between 0.5 and 0.7, the plasma distribution in the chamber is uniform, and the measured value of the light intensity is higher as a consequence. ICP pulse discharge During the pulse discharge, there exists a nonlinear functional relation between any process function R and the discharge power P, as is shown below.
$$\langle R \rangle = \frac{1}{\tau}\int_{0}^{\tau} X\big(P(t)\big)\,\mathrm{d}t \;\neq\; X\!\left(\frac{1}{\tau}\int_{0}^{\tau} P(t)\,\mathrm{d}t\right) \qquad (4)$$

The symbol τ is the pulse period, X is an arbitrary function, P(t) is the loaded pulse power, and the angle brackets denote the time average. The significance of equation (4) is that the average value of the process function R does not equal the value of the process function evaluated at the time-averaged power. As equation (4) is nonlinear, time-averaged processing is inadequate to depict the key parameters of the pulsed plasma, such as the electron temperature and density. Therefore, in order to study the pulsed discharge plasma, the evolution of the plasma parameters should include the time dimension. Because of the difficulty of obtaining the change of the plasma parameters with time, a CCD or iCCD is not a suitable tool. However, the PMT has good time sensitivity for recording the light intensity of the plasma and can be used to analyze the light intensity variation with the process parameters. We obtained the results of the light intensity changes for three different pulse frequencies, as is shown in figure 4 from (a) to (c), and we also obtained the light intensity changes of PMTs No. 1-8 with time, as is shown in figure 5. Figure 4(a) to 4(c) show that the light intensity varies with the pulse cycle, but the light intensity per 1 millisecond is nearly unchanged when the pulse frequency changes. This is because the change of the pulse frequency only changes the number of discharges, which does not affect the light intensity. Increasing the duty ratio of the pulse means that the pulse-on time is increased, which leads to an increase of the total light intensity over a pulse cycle. However, when the pulse frequency is constant, changing the duty ratio does not alter the intensity per unit time. Figure 5 shows the intensity change of PMTs No. 1-8 over the time range of 0 s to 6 ms. Because of the instrument broadening, the light intensity profile is a Gaussian function distributed across the focal plane of the spectrometer and is captured by the eight vertical array fibers. Figure 5 shows that, when the pulse discharge power is loaded as a step, the light intensity does not change as a step. Since the light intensity is related to the plasma parameters, the results in figure 5 show that the plasma parameter evolution is nonlinear during the transient discharge process. An unstable pulse power supply in the pulse-discharge steady state leads to unstable collected spectral intensity, which also affects the plasma parameters in the steady state.
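The inequality in equation (4) can be checked numerically: for a pulsed power waveform and a nonlinear process function, the time average of the function differs from the function of the time-averaged power. The waveform parameters and the choice of X below are purely illustrative.

import numpy as np

# Numerical check of the inequality in equation (4): for a pulsed power waveform P(t)
# and a nonlinear process function X, the time average of X(P(t)) differs from X
# evaluated at the time-averaged power.
tau, duty, p_on = 1e-3, 0.5, 1000.0               # pulse period [s], duty ratio, on-power [W]
t = np.linspace(0.0, tau, 10_000, endpoint=False)
P = np.where(t < duty * tau, p_on, 0.0)           # one pulse cycle

X = np.sqrt                                       # an arbitrary nonlinear process function
print(np.mean(X(P)), X(np.mean(P)))               # about 15.8 vs 22.4: the two differ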
Conclusion A measuring system consisting of a photomultiplier tube (PMT) array, a spectrometer, and other parts has been developed. The positions of the inlet plane of the fiber bundle and of the focal plane of the spectrometer have been analyzed in two different coordinate systems and adjusted to coincide with each other. The output synchronization calibration of the eight PMTs has also been completed. After that, the Ar emission spectrum intensity from typical RF and pulse discharges was collected with this system. The measurement results show that the intensity of the Ar emission spectrum increases with the power and pressure but increases less with the gas flow and current ratio. The intensity of the pulsed discharge is almost unchanged with the frequency. Since the light intensity of the pulsed discharge is related to the plasma parameters, the results show that the plasma parameter evolution is nonlinear during the transient discharge process. The spectral broadening at low pressure is too narrow to be observed; the observed broadening is mainly the instrument broadening. However, the spectral broadening in a high-pressure discharge is the convolution of Doppler broadening and Stark broadening and is wide enough to be observed, so this system can be used to obtain the spectral profile and thereby analyze the ion temperature, which will be our further work.
Fig. 1. Schematic overview of the spectroscopic measurement system.
Fig. 2. The coordinates of the focal plane and of the inlet plane of the bundle fibers.
Table 1. Control voltage, discriminator voltage, and signal counters corresponding to the eight PMTs. (a) Discriminator voltage. (b) Control voltage. (c) ΔCounts is the difference between each PMT's value and the average value of the eight PMTs.
3,635.8
2017-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Landsat Analysis Ready Data for Global Land Cover and Land Cover Change Mapping
The multi-decadal Landsat data record is a unique tool for global land cover and land use change analysis. However, the large volume of the Landsat image archive and the inconsistent coverage of clear-sky observations hamper land cover monitoring at large geographic extent. Here, we present a consistently processed and temporally aggregated Landsat Analysis Ready Data product produced by the Global Land Analysis and Discovery team at the University of Maryland (GLAD ARD) suitable for national to global empirical land cover mapping and change detection. The GLAD ARD represents a 16-day time-series of tiled Landsat normalized surface reflectance from 1997 to present, updated annually, and designed for land cover monitoring at global to local scales. A set of tools for multi-temporal data processing and characterization using machine learning, provided with GLAD ARD, serves as an end-to-end solution for Landsat-based natural resource assessment and monitoring. The GLAD ARD data and tools have been implemented at the national, regional, and global extent for water, forest, and crop mapping, and are available at the GLAD website for free access.
Introduction
The joint National Aeronautics and Space Administration (NASA) and United States Geological Survey (USGS) Landsat program, which started in the early 1970s, provides the longest continuous global archive of satellite earth observation data. Since the launch of Landsat 4 (1982), satellite data have been collected at the same spatial resolution (30 m per pixel) and with similar spectral bands, enabling multi-decadal analysis of land cover and land use. All Landsat data have been provided at no cost to users since 2008 [1]. Globally consistent Collection 1 data processing [2] includes geometric and radiometric correction and observation quality assessment. The free and open data policy and consistent imagery format promoted the use of Landsat data and increased the variety of data applications [3]. Given the "time machine" capabilities of the Landsat archive, it is extensively used for land cover and land use change assessment [4,5]. In recent decades, the development of high-performance computing and machine learning algorithms has allowed scaling up image characterization and change detection approaches to global extent [6][7][8][9]. The methods for globally consistent, multi-temporal land cover characterization and change detection were developed in the late 1990s-early 2000s using low spatial resolution data from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS).
The ARD is generated for Landsat Worldwide Reference System-2 (WRS) scenes that are located within the ice-free land area. Small islands (where no Tier 1 data exist) and the high Arctic and Antarctic regions are excluded from ARD processing. The purpose of the ARD is to map land cover and land use during the growing season, hence images affected by seasonal snow cover are excluded from processing. The seasonal snow cover was analyzed using the MODIS/Terra Snow Cover Monthly L3 Global product (https://nsidc.org/data/MOD10CM/versions/6) and Landsat imagery. We excluded all 16-day intervals (see Section 2.1.5) that feature seasonal snow cover. The snow-free window duration (Figure 1A) ranges from 47 days (three 16-day intervals) in the Arctic to the entire year (51% of all selected WRS path/rows).
Almost 3 million images (2,984,860) from 1 January 1997 to 31 October 2019 were selected and processed to create the global ARD. The annual image count (Figure 2) reflects the number of operational instruments, the data acquisition strategy, and Landsat TM sensor issues precluding correct image processing in the years 2001 and 2002 (see Section 2.1.3). Globally, dry tropical and subtropical regions feature the highest frequency of observations (Figure 1B). The humid tropics (where permanent cloud cover hampers image geolocation) and high latitude regions (where the snow-free season is short) feature a low frequency of selected observations. The Tier 1 data are delivered as precision and terrain corrected products (L1TP) with an image-to-image registration Root Mean Square Error (RMSE) of or below 12 meters [2]. Such high geolocation quality is suitable for time-series analysis without further adjustments.
Conversion to Radiometric Quantity
Due to the differences in spectral band configuration between Landsat sensors, only spectral bands with matching wavelengths between the TM, ETM+, and OLI/TIRS sensors are processed (Table 1). For the thermal infrared data, we use the high-gain mode thermal band (band 62) of the ETM+ sensor and the 10.6-11.19 µm thermal band (band 10) of the TIRS sensor. Landsat Collection 1 data contain radiation measurements for reflective visible/infrared bands in the form of scaled reflectance (OLI) or radiance (TM/ETM+) recorded as integer digital numbers (DNs) [2]. We convert the data into top-of-atmosphere (TOA) reflectance, scaled consistently across all Landsat sensors. Spectral reflectance (value range from zero to one) is scaled from 1 to 40,000 and recorded as a 16-bit unsigned integer value.
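As a minimal illustration of the scaling convention described above (not code from the GLAD toolkit; the clipping behavior at the range boundaries is an assumption), the following Python sketch converts a reflectance value in the 0-1 range to the 16-bit integer encoding and back:

import numpy as np

def encode_reflectance(rho):
    """Scale reflectance (0-1) to the 16-bit integer range 1-40,000."""
    dn = np.round(np.asarray(rho, dtype=np.float64) * 40000.0)
    return np.clip(dn, 1, 40000).astype(np.uint16)

def decode_reflectance(dn):
    """Convert the 16-bit encoding back to reflectance (0-1)."""
    return np.asarray(dn, dtype=np.float64) / 40000.0

# Example: a reflectance of 0.25 is stored as 10,000.
print(encode_reflectance(0.25))   # 10000
print(decode_reflectance(10000))  # 0.25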
For the TM and ETM+ data, we use the TOA conversion methods and coefficients from [24], see Equation (1). For the ETM+ sensor, two sets of gain and bias factors are implemented corresponding to the high or low gain data quantization settings [24]. The correct coefficients are selected by checking the per-band "GAIN" metadata parameter. In rare cases, the gain setting changed within the recorded scene, which is indicated by the "GAIN_CHANGE" metadata parameter. For such scenes, we process only the northern portion of the image and erase data for the rest of the image. The OLI data are provided as TOA reflectance without solar zenith correction. We apply Equation (2) to perform the correction for the incoming solar radiation angle. The thermal band is converted into brightness temperature and recorded in Kelvin × 100 to preserve measurement precision (Equation (3)). T_B is the scaled brightness temperature; K1 and K2 are calibration coefficients; G is the gain factor; DN is the original digital number; B is the bias factor. Parameters G, B, K1, and K2 are taken from [24] for the TM/ETM+ sensors and from the image metadata for the TIRS sensor.
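The typeset forms of Equations (1)-(3) did not survive extraction. The following LaTeX sketch is a hedged reconstruction based on the symbol definitions above and on standard Landsat calibration conventions; the exact expressions in the original paper may differ, and the symbols d_ES, ESUN, ρ' and θ_s are assumptions not defined in the text.

% Eq. (1), sketch: at-sensor radiance from digital numbers (TM/ETM+),
% followed by the standard radiance-to-reflectance conversion of [24]
L_{\lambda} = G \cdot DN + B, \qquad
\rho_{TOA} = \frac{\pi \, L_{\lambda} \, d_{ES}^{2}}{ESUN_{\lambda} \, \cos\theta_{s}}

% Eq. (2), sketch: solar zenith correction applied to the OLI scaled reflectance \rho'
\rho_{TOA} = \frac{\rho'}{\cos\theta_{s}}

% Eq. (3), sketch: scaled brightness temperature (Kelvin x 100) from the thermal band
T_{B} = 100 \cdot \frac{K_{2}}{\ln\!\left(K_{1}/L_{\lambda} + 1\right)}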
Observation Quality Assessment
The per-pixel observation quality assessment is used to highlight observations with a high probability of atmospheric contamination by clouds, haze, or cloud shadows. In addition, the observation quality assessment performs generic snow/ice and water mapping. Observation quality assessment is based on the aggregation of the Landsat quality assessment band and the GLAD quality assessment model output. The Landsat Collection 1 data include a Quality Assessment (QA) band based on the globally consistent CFMask cloud and cloud shadow detection algorithm [25,26]. The QA band contains cirrus cloud (Landsat 8 only), cloud, cloud shadow, snow/ice, and radiometric saturation flags [27].
The GLAD observation quality assessment model developed by our team represents a set of regionally adapted decision tree ensembles [28] that map the likelihood of a pixel representing cloud, cloud shadow, heavy haze, and, for clear-sky observations, water or snow/ice. The decision tree models were developed for global Landsat processing [6] and later improved at the regional level [19,29]. To improve cloud and cloud shadow mapping, the models are created separately for the TM, ETM+, and OLI sensors. Each region (Africa, Australia, South and Central America, South and Southeast Asia, boreal and temperate Eurasia and North America) has a separate set of sensor-specific models. To build each set of models, we used from 100 to 200 Landsat image scenes which were classified into land, water, clouds, cloud shadows, snow/ice, and haze by experts. Each model was derived from the training data and applied to a random set of images within the corresponding region. We iterated the model by adding new training data until the model performance was considered optimal. The GLAD observation quality assessment models are applied to each image individually. The input data include Landsat reflective and thermal bands, band ratios, 3 × 3 focal means of each band and ratio, and topography variables that include elevation, slope, and aspect derived from the Shuttle Radar Topography Mission Digital Elevation Model (SRTM DEM) from 60° North to 60° South and the ASTER Global Digital Elevation Model (GDEM) in polar regions. The model outputs represent likelihoods of assigning a pixel to the cloud, shadow, haze, snow/ice, and water classes. A comparison of the GLAD and CFMask cloud and cloud shadow detection results in Southeast Asia (Table 2) suggests the importance of aggregating the model results. The algorithms have a high agreement for cloud detection; however, they provide complementary information for mapping cloud shadows. Since our primary goal was to reduce the presence of clouds and shadows in the time-series data, we decided to merge the CFMask product with the GLAD algorithm output. From the CFMask product, we use the high-probability cloud, shadow, and snow/ice flags. From the GLAD model outputs, we assign categories based on the likelihoods of thematic classes. This way, cloud, shadow, haze, water, snow/ice, and land masks are created for each Landsat image. The masks were subsequently aggregated into an integral observation Quality Flag (QF) that highlights cloud/shadow contaminated observations, separates topographic shadows from likely cloud shadows, and specifies the proximity to clouds and cloud shadows. To derive the QF, we implement buffering around cloud and shadow pixels, calculate the distance to clouds (along the cloud shadow projection), and calculate areas affected by topographic shadows using the DEM and sun position. The list of criteria for the output QFs is presented in Table 3 (values 1-14). For the Landsat 5 TM sensor, we applied an additional observation quality check to remove sensor errors. Specifically, we excluded observations which have incorrect (usually abnormally low) radiance measurements for selected bands. We assigned a "no data" flag to all pixels that have DN values for the visible and NIR bands below 7 (an empirically derived threshold). For Landsat 5 data from the years 2001 and 2002, when most of the sensor anomalies were detected, an image was removed from ARD processing if it contained more than 10,000 such pixels.
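As an illustration of the aggregation logic described above (class masks from CFMask flags and GLAD model likelihoods merged per pixel), here is a hedged Python sketch; the array names, class priorities, and the 50% likelihood threshold are assumptions for illustration and are not taken from the GLAD software.

import numpy as np

# Hypothetical inputs, all 2-D arrays of the same shape:
#   cfmask_cloud, cfmask_shadow, cfmask_snow : boolean high-confidence CFMask flags
#   p_cloud, p_shadow, p_haze, p_snow, p_water : GLAD model likelihoods in [0, 1]
def merge_masks(cfmask_cloud, cfmask_shadow, cfmask_snow,
                p_cloud, p_shadow, p_haze, p_snow, p_water, threshold=0.5):
    """Combine CFMask flags and GLAD likelihoods into simple per-pixel class masks."""
    cloud  = cfmask_cloud  | (p_cloud  >= threshold)
    shadow = cfmask_shadow | (p_shadow >= threshold)
    haze   = (p_haze >= threshold) & ~cloud
    snow   = cfmask_snow | (p_snow >= threshold)
    water  = (p_water >= threshold) & ~(cloud | shadow | haze)
    land   = ~(cloud | shadow | haze | snow | water)
    # A real implementation would additionally buffer clouds/shadows, trace shadow
    # projections, and model topographic shadows before assigning the QF codes of Table 3.
    return {"cloud": cloud, "shadow": shadow, "haze": haze,
            "snow": snow, "water": water, "land": land}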
Table 3 (excerpt; flattened during extraction). Observation Quality Flag criteria:
Cloud shadow: shadow detected; the pixel is located within the projection of a detected cloud. The cloud projection is defined using solar elevation and azimuth and is limited to a 9 km distance from the cloud.
Topographic shadow: shadow detected; the pixel is located outside cloud projections and within the estimated topographic shadow (estimated using the DEM and solar elevation and azimuth).
6 Snow/Ice: snow or ice detected.
Cloud proximity: aggregation (OR) of two rules: (i) a 1-pixel buffer around detected clouds; (ii) above-zero cloud likelihood (estimated by the GLAD cloud detection model) within a 3-pixel buffer around detected clouds.
9 Shadow proximity: shadow likelihood (estimated by the GLAD shadow detection model) above 10% for pixels either (i) located within the projection of a detected cloud, or (ii) within 3 pixels of a detected cloud or cloud shadow.
10 Other shadows: shadow detected; the pixel is located outside the projection of a detected cloud and outside of the estimated topographic shadow.
11 Additional cloud proximity over land: clear-sky land pixels located closer than 7 pixels to detected clouds.
12 Additional cloud proximity over water: clear-sky water pixels located closer than 7 pixels to detected clouds.
14 Additional shadow proximity over land: clear-sky land pixels located closer than 7 pixels to detected cloud shadows.
15 Land: same as code 1. Codes 15-17 are identical to codes 1, 11 and 14 except for the presence of water in a given 16-day composite. These codes indicate that water was detected in this 16-day interval but was not used for compositing because a land observation was also present within the same 16 days. Such conditions may occur within intermittent water bodies, wetlands, rice paddies, etc. These codes are created to facilitate the analysis of water dynamics.
Reflectance Normalization
Reflectance normalization is a required step that allows extrapolation of the image characterization models in time and space by ensuring spectral similarity of the same land-cover types. Normalization addresses several factors that affect surface reflectance measurement from space, including scattering and attenuation of radiation passing through the atmosphere, and surface anisotropy. We implemented a relative normalization procedure [18][19][20] that is not computationally intensive and does not require synchronously collected or historical data on atmospheric properties [30] or land-cover specific anisotropy correction factors [31]. The normalized surface reflectance is not equal to surface reflectance derived using atmospheric transfer models and a solution for the Bidirectional Reflectance Distribution Function (BRDF). The GLAD ARD data were designed for land cover and land cover change mapping and should not be used as a source dataset for the analysis of surface reflectance properties. The Landsat image normalization consists of four steps: production of the normalization target dataset; selection of pseudo-invariant objects; model parametrization; and model application.
(1) Normalization target
We derived the target surface reflectance data from twelve years (2000-2011) of MODIS/Terra imagery. The MODIS 16-day surface reflectance data [32] for selected spectral bands (see Table 1) were collected from the MOD44C product with a spatial resolution of 250 m/pixel [33]. The MODIS time-series analysis to produce a normalization target included three steps. First, we filtered out all observations with atmospheric contamination and a high off-nadir angle using ancillary data included in the MOD44C product.
Second, we calculated the Normalized Difference Vegetation Index (NDVI) for each observation and ranked all observation dates by the corresponding NDVI value. Third, we calculated the average spectral reflectance for all observations with NDVI above the 75th percentile. The resulting growing season average spectral reflectance was re-scaled to match the Landsat TOA reflectance data (to the range from 1 to 40,000) and resampled to the Landsat spatial resolution. We did not use the MODIS Nadir BRDF-Adjusted Reflectance (NBAR) product as a normalization target for two reasons. First, the NBAR data are only available at 500 m/pixel spatial resolution. Second, no high quality NBAR products were available when the GLAD ARD system was developed, and we decided to keep the MOD44C-based normalization target for product consistency.
(2) Pseudo-Invariant Objects
The mask of pseudo-invariant objects is derived automatically and used to calibrate the per-scene surface reflectance normalization model. The mask includes clear-sky land observations (pixels) that represent the same land cover type and phenology stage in the Landsat image and the MODIS normalization target composite. Water and snow/ice observations are excluded from the mask due to the different properties of surface anisotropy. To select the pseudo-invariant pixels, we first exclude all observations except clear-sky land using the scene QF. Second, we calculate the absolute difference between Landsat and MODIS spectral reflectance for the red and shortwave infrared bands. Only pixels with differences below 0.1 reflectance value for both spectral bands qualify for the pseudo-invariant mask. Bright objects (with red band reflectance above 0.5) are excluded from the mask. To avoid reflectance normalization artifacts due to insufficient calibration data, Landsat images with fewer than 10,000 pseudo-invariant pixels are discarded from the processing chain.
(3) Model Parametrization
To parametrize the reflectance normalization model, we calculate the bias between Landsat TOA reflectance and MODIS surface reflectance for each spectral band within the mask of pseudo-invariant objects. We collect the per-band median bias for each 10 km interval of distance from the Landsat ground track. The set of median values is used to parametrize a per-band linear regression model using the least-squares fitting method. For each image and each spectral band, we derive gain (G) and bias (B) coefficients to predict the reflectance bias as a function of the distance from the ground track (Equation (4)): Δ = G × d + B, where Δ is the reflectance bias, G is the gain factor, d is the distance from the Landsat ground track, and B is the bias factor. For Landsat scenes with a small land fraction (less than 1/16 of the image), we calculate a mean reflectance bias (coefficient G set to 0). Such conditions are usually found in coastal regions. For the brightness temperature band, we calculate a single mean bias value for all pseudo-invariant target pixels within the image. Figure 3 illustrates the reflectance normalization model calibration for a Landsat scene in the Brazilian Amazon. Spectral reflectance correction using the bias adjustment is similar to the dark-object subtraction method [22]. By using MODIS spectral data, we ensure automatic model applicability for various geographic regions and land cover types. Modeling the reflectance bias from the distance to the ground track (related to the off-nadir angle) allows us to implement both bias adjustment and surface anisotropy correction as a single, computationally simple step.
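A hedged Python sketch of the per-band model parametrization described above follows; the variable names and the use of numpy.polyfit are illustrative assumptions, not the GLAD implementation. It bins pseudo-invariant pixels by distance from the ground track, takes per-bin median biases, and fits the linear model Δ = G·d + B by least squares.

import numpy as np

def fit_normalization_model(landsat_toa, modis_sr, distance_m, invariant_mask,
                            bin_width_m=10000.0):
    """Fit gain (G) and bias (B) so that G*d + B predicts the TOA-minus-MODIS bias."""
    d = distance_m[invariant_mask]
    bias = landsat_toa[invariant_mask] - modis_sr[invariant_mask]
    # Per-band median bias for each 10 km interval of distance from the ground track.
    bins = np.floor(d / bin_width_m).astype(int)
    centers, medians = [], []
    for b in np.unique(bins):
        centers.append((b + 0.5) * bin_width_m)
        medians.append(np.median(bias[bins == b]))
    # Least-squares fit of a straight line through the per-bin medians.
    gain, offset = np.polyfit(np.asarray(centers), np.asarray(medians), deg=1)
    return gain, offset  # G, B

# The fitted model is then applied per pixel: rho_norm = rho_toa - (G * d + B).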
The average global calibration parameters presented in Table 4 illustrate the general properties of the spectral reflectance correction during the normalization process. The bias coefficient (B) is highest for the visible bands, which are most affected by Rayleigh scattering, hence Landsat TOA reflectance is higher compared to MODIS surface reflectance. The bias coefficient decreases with increasing wavelength and is negative for the shortwave bands affected by radiation attenuation. The gain coefficient (G) has a small positive value, which reflects the generic features of land surface anisotropy that affect observations from a narrow field of view, AM overpass satellite system such as Landsat. The gain and bias coefficients have pronounced geographic variation (Figure 4). The bias coefficient, especially for the visible bands, has high average values in moist climates and low values in dry climates, especially over deserts. The surface anisotropy correction mostly affects observations over tall vegetation, such as tropical and temperate forests. After the gain and bias coefficients are derived for each spectral band, we apply the resulting models to the entire Landsat image. The normalized surface reflectance is calculated per pixel using Equation (5). To apply the model, we use the raster layer of distances from the ground track (in meters) that is calculated for each WRS path/row from the Landsat orbital parameters.
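Equation (5) itself did not survive extraction; given the symbol definitions that follow and the bias model above, a hedged LaTeX reconstruction is:

\rho_{NORM} = \rho_{TOA} - \left( G \cdot d + B \right)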
ρ_NORM is the normalized surface reflectance; ρ_TOA is the TOA reflectance; G is the gain factor; d is the distance from the Landsat ground track; B is the bias factor. GLAD ARD normalized surface reflectance is highly correlated with the MODIS surface reflectance data used for normalization model parametrization (Figure 5). To illustrate the GLAD ARD product properties, we compared the normalized surface reflectance of the red, NIR, and SWIR (1.6 µm) spectral bands with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data (MCD43A4). The MODIS NBAR data are collected daily from Terra and Aqua MODIS imagery at 500 m spatial resolution (https://lpdaac.usgs.gov/products/mcd43a4v006/). The MODIS data were resampled to the Landsat spatial resolution. For comparison, we randomly selected 2,000 points within the conterminous United States. For each point, we extracted Landsat ARD spectral data and the corresponding 16-day clear-sky averages of the daily MCD43A4 product for June-August 2018. In total, we collected data for 6,099 samples that contain clear-sky land observations for both Landsat and MODIS. Spectral reflectance for the visible (red) and SWIR bands of Landsat and MODIS shows a close relationship (Figure 6A,C). The NIR band comparison reveals differences between Landsat and MODIS data, with the ARD product consistently underestimating surface reflectance compared to MODIS (Figure 6B). The mean spectral reflectance difference between Landsat ARD and MODIS NBAR data is −0.006 for the red band (95% Confidence Interval ±0.0008), −0.043 for the NIR band (CI ±0.0012), and −0.020 for the SWIR band (CI ±0.0012). The differences between MODIS-based and Landsat-based surface reflectance measurements are partially due to the different spatial resolution of the datasets. We suggest that the strong correspondence between MODIS NBAR and normalized Landsat surface reflectance at a large geographic extent confirms the utility of the GLAD ARD product for land cover classification. However, data users should be aware of the difference between MODIS NBAR and GLAD ARD surface reflectance products that may preclude applications that rely on the precise estimation of surface reflectance.
Temporal Integration and Tiling
The final step of the GLAD ARD processing is a temporal aggregation of individual Landsat images into 16-day composites. The compositing interval was selected to correspond to the Landsat orbital cycle and the MODIS Level 3 data products [34]. The use of a 16-day interval reduces the requirements for data download, storage, and processing compared to the daily data aggregation used by the USGS ARD [16], with a negligible reduction of usable data, especially outside the USA. The ranges of dates for each interval (Table 5) correspond to the MODIS 16-day dataset [33]. The last interval consists of 13 days (14 days for a leap year). Using a compositing system that is tied to the calendar year simplifies annual data processing and seasonal reflectance comparison. The 16-day composites are stored in geographic coordinates and organized in the form of 1 × 1 degree tiles (see Section 3). To create a 16-day composite, we first select all images within the date range that overlap a selected 1 × 1 degree tile. All selected images are projected to geographic coordinates using the nearest neighbor resampling method to preserve reflectance values. If more than one image overlaps the composite area, we analyze the QF layers of these images. For each pixel with overlapping images, we select the best observation following this sequence of QF (best to worst): 1-14-11-2-12-6-5-10-9-8-7-4-3 (see QF codes in Table 3). The observation with the best QF is selected. If several observations with the same QF are selected, the per-band mean reflectance value is retained in the composite.
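The per-pixel best-observation selection can be sketched as follows in Python; this is a hedged illustration of the QF priority rule quoted above (the data structures and function name are assumptions), not the GLAD compositing code.

import numpy as np

# Priority order quoted in the text, best to worst.
QF_PRIORITY = [1, 14, 11, 2, 12, 6, 5, 10, 9, 8, 7, 4, 3]
RANK = {qf: i for i, qf in enumerate(QF_PRIORITY)}

def composite_pixel(observations):
    """observations: list of (qf, reflectance_vector) tuples for one pixel.
    Returns (best_qf, per-band mean reflectance of the best-ranked observations)."""
    ranked = [(RANK.get(qf, len(QF_PRIORITY)), qf, np.asarray(refl, dtype=float))
              for qf, refl in observations]
    best_rank = min(r for r, _, _ in ranked)
    best = [(qf, refl) for r, qf, refl in ranked if r == best_rank]
    qf = best[0][0]
    mean_refl = np.mean([refl for _, refl in best], axis=0)
    return qf, mean_refl

# Example: two clear-sky land observations (QF 1) are averaged;
# a cloud-contaminated one (QF 3) is ignored.
obs = [(1, [1200, 2300]), (3, [5000, 5200]), (1, [1180, 2280])]
print(composite_pixel(obs))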
The output composite includes six reflective bands, a brightness temperature band, and a QF band. The QF band value is preserved from the image and modified for values 1, 11, and 14 to record the presence of water in the time-series (see Table 3, QF values 15-17). Effectively, the output 16-day composites represent the observation(s) with the highest quality. This does not mean, however, that the 16-day data represent a spatially complete clear-sky coverage. No-data gaps are retained in the composites (marked with QF equal to zero), and cloud/shadow contaminated observations are retained if no clear-sky observations are available within the corresponding time interval.
Global Tile System
The GLAD ARD tile system was developed to simplify global data handling. Geographic coordinates using the World Geodetic System (WGS84) were selected as the most universal way of sharing global data. The coordinate system is defined by the EPSG Geodetic Parameter Dataset as EPSG:4326 (https://spatialreference.org/ref/epsg/wgs-84/), or using the PROJ standard (http://proj.org) as +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs. The nearest neighbor resampling may be used to re-project the geographic data into the original Universal Transverse Mercator (UTM) Landsat pixel grid without distortion, assuming that the output UTM zone is the same as for the source Collection 1 Landsat imagery. The spatial resolution of the ARD dataset is 0.00025 degree per pixel, which corresponds to 27.83 m per pixel at the Equator. The pixel size is a compromise between the need to preserve the original Landsat data pixel size (30 m/pixel) and to avoid using a repeating decimal number for pixel size (which may cause problems with georeference precision). The ARD product is stored in 1 × 1 geographic degree tiles. The tile format facilitates data handling and the parallelization of data processing. The exact 1 × 1 degree tile format, however, is not optimal for contextual analysis when neighboring pixels are located on different tiles. We implemented a partial solution to this issue by creating a tile system with a 2-pixel overlap. The actual ARD tile size is 4004 × 4004 pixels, an extent of 1.0005 by 1.0005 degrees. The 2-pixel buffer allows implementing contextual analyses using 3 × 3 and 5 × 5 kernels without the need to read data from multiple tiles at a time. Tile names are derived from the tile center and refer to the integer value of degrees. E.g., the name of a tile with center 17.5E and 52.5N is 017E_52N. The ARD product is only generated for tiles that include ice-free land area and where Landsat Tier 1 data exist (Figure 7). The tile names are used for the folder structure only. The tile system can be downloaded from https://glad.umd.edu/ard/home in ESRI Shapefile format.
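A small Python sketch of the tile naming rule described above follows; the function name, the zero-padding widths, and the flooring convention for southern/western hemispheres are assumptions inferred from the 017E_52N example.

import math

def tile_name(center_lon, center_lat):
    """Name a 1x1 degree GLAD ARD tile from its center, e.g. (17.5, 52.5) -> '017E_52N'."""
    lon_deg = int(math.floor(center_lon))
    lat_deg = int(math.floor(center_lat))
    ew = "E" if lon_deg >= 0 else "W"
    ns = "N" if lat_deg >= 0 else "S"
    return f"{abs(lon_deg):03d}{ew}_{abs(lat_deg):02d}{ns}"

print(tile_name(17.5, 52.5))    # 017E_52N (example given in the text)
print(tile_name(-68.5, -12.5))  # hypothetical southern/western example; convention assumed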
16-Day Composite Data
Each data granule contains observations collected during a single 16-day interval. There are 23 intervals per year (see Table 5 for interval dates). Each interval has a unique numeric identification, starting from the first interval of the year 1980. This identification is used as a file name, while the tile name is used to identify data folders (Section 3.1). Equation (6) shows how to obtain the interval identification number for a selected year and season: ID = (Year − 1980) × 23 + Interval, where ID is the interval identification number (file name), Year is the selected year (1980 and later), and Interval is the selected annual 16-day interval (1-23). The 16-day data for a tile are stored as 8-band, 16-bit unsigned, LZW-compressed GeoTIFF files. The list of image bands is provided in Table 6 (see Table 1 for Landsat band abbreviations and wavelengths). Image band 8 consists of an observation quality flag (QF) that reflects the quality of the observation used to create the composite. The QF (Table 3) is inherited from the Landsat image which is selected for the 16-day composite. QF values 1, 2 and 15 indicate clear-sky observations. QF values 11-14 and 16-17 are considered clear-sky data with an indication of cloud/shadow proximity. QF values 5 and 6 indicate seasonal data quality issues (topographic shadows and snow cover). These observations may be removed from data processing if the number of clear-sky observations is sufficient. QF values 3, 4, and 7-10 are considered contaminated by clouds and shadows and are usually excluded from data processing.
Multi-Temporal Metrics
Despite the global radiometric consistency of the 16-day GLAD ARD product, direct application of this dataset as input to a land cover characterization model is hampered by the irregular frequency of clear-sky observations. The availability of clear-sky observations is a function of the Landsat orbital constellation, the data acquisition strategy, and cloud cover. As a result, annual 16-day time-series for the same area may have dramatically different numbers of clear-sky observations between seasons and years [19]. While 16-day time-series data contain sufficient information to identify land cover types and land cover dynamics (Figure 8), the inconsistency of observation frequency may not allow calibration of a regional mapping model using solely ARD as source data. The multi-temporal metrics method is a time-series data transformation that improves spatial and temporal consistency, simplifies phenological analysis, and facilitates land cover mapping and change detection at large geographic extents. The metrics approach helps to overcome the inconsistency of clear-sky data availability that is typical for sensors with low observation frequency, such as Landsat. The metrics method was developed in the mid-1980s to extract phenological features from the AVHRR-based NDVI time-series [35,36]. At the same time, the idea of using vegetation index time-series to extract spectral reflectance corresponding to a certain phenological stage was proposed by Holben [37]. Later, both approaches were merged by researchers from the Laboratory for Global Remote Sensing Studies at the University of Maryland [38].
In their work, metrics were calculated by extracting spectral information for specific phenological stages defined by the NDVI annual dynamics. The multi-temporal metrics have been widely used for forest monitoring at continental and global scales using MODIS [39] and Landsat data [6,19,20,40]. ARD-based multi-temporal metrics represent a set of statistics extracted from a 16-day observation time-series. The metric types and statistical algorithms may vary depending on the mapping objective. Here, we present algorithms for the two most common objectives: annual land cover mapping and detection of land cover changes between two consecutive years. To benefit these objectives, we use GLAD ARD data to create two independent sets of multi-temporal metrics: annual phenological metrics and annual change detection metrics. The metric processing tools and supervised classification tools that allow metrics characterization are available at https://glad.umd.edu/ard/home.
Annual Phenological Metrics
The annual phenological metrics serve as source data for land cover, land use, and vegetation structure mapping models. Metrics represent a set of statistics extracted from the normalized surface reflectance time-series within a corresponding calendar year (January 1-December 31). However, limited and inconsistent data availability in regions with a short snow-free season or frequent cloud cover may preclude consistent phenology characterization by annual observation time-series. To fill long gaps in the observation time-series, we use data from the three previous years. Optionally, the gap-filling algorithm can be disabled to create metrics solely from data collected during the corresponding year. The process of phenological metrics processing includes two stages: (1) selecting clear-sky observations and filling gaps in the observation time-series; and (2) extracting reflectance distribution statistics from the time-series of selected observations. First, we compile a gap-filled time-series of annual observations with the lowest atmospheric contamination (Figure 9). The per-pixel criterion for 16-day data selection is defined based on the distribution of QFs within the four years of data. If clear-sky land or water observations are present in the time-series data, only those are used for subsequent analysis. If no such observations are found, the software changes the observation quality threshold for data inclusion, first allowing observations with proximity to clouds and shadows, and then allowing all available observations.
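A hedged Python sketch of this per-pixel selection rule follows; the QF groupings mirror the descriptions of Table 3 given earlier, but the exact grouping used by the GLAD software is an assumption.

import numpy as np

CLEAR_SKY = {1, 2, 15}                 # clear-sky land/water
PROXIMITY = {11, 12, 13, 14, 16, 17}   # clear-sky with cloud/shadow proximity

def select_observations(qf_series):
    """qf_series: per-interval QF codes for one pixel over the four years of data.
    Returns indices of the observations admitted for metric extraction."""
    qf = np.asarray(qf_series)
    all_valid = set(int(v) for v in np.unique(qf)) - {0}  # everything except no-data
    for allowed in (CLEAR_SKY, CLEAR_SKY | PROXIMITY, all_valid):
        idx = [i for i, v in enumerate(qf) if int(v) in allowed]
        if idx:            # relax the quality threshold only if nothing qualified
            return idx
    return []              # only no-data observations present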
To create an annual gap-filled observation time-series for metric extraction, we first analyze the duration of the gaps between existing 16-day clear-sky observations of the corresponding year (Year i). If a gap exceeds two months (four 16-day intervals), we search for clear-sky observations in the previous years within the gap date range, starting with Year i-1 and continuing to Year i-3. When clear-sky observations are found, they are added to the gap-filled time-series data, and the gap analysis is performed again until all gaps longer than two months are filled or no available data are found within the four-year interval. After compilation of the annual gap-filled observation time-series, we compute selected normalized band ratios, or indices, for each selected observation using Equation (7): NR_AB = (ρ_A − ρ_B)/(ρ_A + ρ_B) × 10,000 + 10,000. A spectral variability vegetation index (SVVI, [41]) is also calculated using the standard deviation of the spectral reflectance values for each pixel (Equation (8)). Multi-temporal metrics are generated from the time-series using two observation date ranking approaches (Figure 10). First, we rank all observations by each spectral band reflectance or index value individually. From the obtained ranks, we select the highest/lowest, second to the highest/lowest, and median reflectance values. We also calculate averages for all observations between selected ranks (see Figure 10 for the list of average values). Second, we rank observation dates by the corresponding values of (i) NDVI, (ii) SVVI, and (iii) brightness temperature. From these observation date ranks, we extract values corresponding to the highest/lowest and second to highest/lowest ranks for each of the reflective bands, and calculate average reflectance values between selected ranks. In addition to the spectral metrics, the software produces a set of auxiliary layers including the number of clear-sky 16-day composites, observation quality, and water presence per pixel.
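A hedged Python sketch of the two ranking approaches described above (a minimal illustration; the array names, the reduced band set, and the use of the scaled ratio from Equation (7) as NDVI are assumptions):

import numpy as np

def scaled_ratio(a, b):
    """Equation (7): normalized ratio scaled to the 1-20,000 integer range."""
    return (a - b) / (a + b) * 10000.0 + 10000.0

def phenological_metrics(red, nir):
    """red, nir: 1-D arrays of clear-sky observations for one pixel (one year)."""
    ndvi = scaled_ratio(nir, red)
    metrics = {}
    # First approach: rank each variable independently by its own value.
    for name, series in {"red": red, "nir": nir, "NDVI": ndvi}.items():
        s = np.sort(series)
        metrics[f"{name}_min"] = s[0]
        metrics[f"{name}_max"] = s[-1]
        metrics[f"{name}_med"] = np.median(s)
        q1, q3 = np.percentile(s, [25, 75])
        metrics[f"{name}_avg_q1q3"] = s[(s >= q1) & (s <= q3)].mean()
    # Second approach: rank observation dates by NDVI and pull band values
    # from the dates with the lowest/highest NDVI.
    order = np.argsort(ndvi)
    metrics["red_min_RN"] = red[order[0]]
    metrics["red_max_RN"] = red[order[-1]]
    return metrics

# Example with a short synthetic time-series.
red = np.array([900.0, 700.0, 650.0, 1100.0])
nir = np.array([2500.0, 3200.0, 3400.0, 2100.0])
print(phenological_metrics(red, nir)["red_max_RN"])  # red value on the greenest date: 650.0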
Figure 10. Phenological metric types and naming convention (metric names shown in square brackets). The first set of metrics represents statistics calculated from the 16-day observation time-series ranked by the spectral reflectance or index value; the ranking is performed independently for each spectral band or index. The second set of metrics represents statistics calculated from the 16-day observation time-series ranked by the value of a corresponding variable (NDVI, SVVI, and brightness temperature). Q1, Q2, and Q3 stand for the 1st, 2nd, and 3rd quartiles. * Amplitudes are calculated in memory during classification model application and are not written to disk.
The metrics are stored as single-band 16-bit unsigned GeoTIFF files using the same tile system as the ARD (see Section 3.1). The metrics set for each tile is stored in a separate folder. The metric naming convention is the following (see Figure 10 for band, index, and statistic names): YYYY_B_S_C.tif, where YYYY is the corresponding year; B is the spectral band or index; S is the statistic extracted from the observation time-series; and C is the corresponding band or index used for observation ranking (only for metrics extracted from ranks defined by a corresponding value). Example: 2018_blue_max_RN.tif represents the value of the normalized surface reflectance of the Landsat blue band for the 16-day interval that has the highest red/NIR normalized ratio (also known as NDVI) value during the year 2018. Not all of the metrics are recorded to disk. Specifically, the amplitude metrics are calculated in memory during the classification procedure. To include spatial context in image classification, the focal mean of each metric using a 3 × 3 kernel is calculated during the classification routine. The large number of multi-temporal metrics (816 metrics in the phenological metrics set described above) is warranted by the large variety of land cover classes that may be mapped using these data. Different metrics and their combinations highlight specific features of the surface reflectance and land surface phenology that are required for mapping different land cover types. The simple interquartile reflectance average (the average of all values between the 1st and 3rd quartiles, each spectral band ranked independently by its value) may serve as a generic clear-sky image composite for a specific year (Figure 11A).
If the observations are ranked by the corresponding NDVI value, and the average is calculated from the top ranks, the composite will represent surface reflectance during the peak of the growing season (Figure 11B). Metrics extracted from the NDVI and brightness temperature ranks are required for agriculture mapping [23,42,43]. The spectral reflectance amplitudes highlight the land surface phenology and simplify identification of evergreen trees, permanent water features, and crop rotation characterization (Figure 11C). Using normalized ratios and their phenology facilitates mapping of different land cover types and simplifies visual assessment (Figure 11D).
Annual Change Detection Metrics
The annual change detection metrics are designed to facilitate land cover change mapping between the corresponding and previous years while reducing false change detections due to reflectance fluctuations and inconsistent clear-sky observation availability. The change detection metrics describe the surface reflectance within the corresponding and preceding years, the spectral reflectance differences between these years, and the surface reflectance trend within the time-series. The process of change detection metrics construction includes three stages: (1) selecting clear-sky observations; (2) constructing data time-series; and (3) extracting reflectance and reflectance change distribution statistics from the time-series. To build a set of change detection metrics, we utilize four years of data (one corresponding and three preceding) and select observations with the best available quality. The metric set can be generated with less than four years of data, but at least two consecutive years of data are required. Only observations with the lowest atmospheric contamination are used for metrics extraction. The per-pixel criterion for 16-day data selection is defined automatically based on the distribution of observation quality flags within the four years of data, similar to the phenological metrics algorithm. All other observations are discarded from further processing. To facilitate extraction of the change detection data, we construct four different data time-series (time-series C, P, I, and D, see Figure 12). Time-series C is comprised of the clear-sky observations of the corresponding year (Year i).
To create a historical baseline for change detection (time-series P), we collect an average reflectance from the three preceding years (Year i-1 to Year i-3), only for those 16-day intervals that have clear-sky observations in time-series C. If no observations are found for a certain 16-day interval in the historic data, we use clear-sky data from the closest observation before/after the missing 16-day composite interval. For each observation of time-series C and P, in addition to the normalized reflectance, we calculate normalized ratios from selected bands (Equation (7)). Time-series P and C are further aggregated into a single, 46-interval time-series to calculate trend analysis metrics (time-series I). Finally, the per-16-day interval differences of all spectral band and index values between time-series P and C comprise time-series D.
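The construction of the P, I, and D time-series can be sketched as follows; this hedged Python illustration assumes simple per-interval arrays and a least-squares slope for the trend metric, and is not the GLAD implementation (in particular, the real algorithm fills remaining gaps from the nearest observation rather than leaving them missing).

import numpy as np

def change_detection_series(current, previous_years):
    """current: array of shape (23,) with NaN for missing intervals (time-series C).
    previous_years: array of shape (3, 23) for Years i-1..i-3 (NaN where missing)."""
    c = np.asarray(current, dtype=float)
    hist = np.nanmean(np.asarray(previous_years, dtype=float), axis=0)
    # Time-series P: historical baseline only where C has clear-sky observations.
    p = np.where(np.isnan(c), np.nan, hist)
    # Time-series D: per-interval differences between the corresponding and preceding years.
    d = c - p
    # Time-series I: P followed by C (46 intervals), used for the trend metric.
    i_series = np.concatenate([p, c])
    t = np.arange(1, 47, dtype=float)
    ok = ~np.isnan(i_series)
    slope = np.polyfit(t[ok], i_series[ok], deg=1)[0] if ok.sum() > 1 else np.nan
    return p, d, slope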
To extract statistics, we use three different approaches (Figure 13):
• For time-series C and P, we extract two independent sets of metrics that reflect annual phenology. Observations in each time-series are ranked by (a) the spectral band or index value, and (b) the corresponding NDVI and brightness temperature values. Similar to the phenological metrics, we record selected ranks and averages between ranks for each spectral variable.
• Time-series I is used to analyze the unidirectional trend of spectral reflectance within a two-year interval. We use the least-squares method to fit a linear regression model that predicts the spectral reflectance or index value from the observation date (date range from 1 to 46) for clear-sky observations. We record the slope of the linear regression as a metric value. In addition, we calculate and record the standard deviation of the spectral variable within time-series I.
• Time-series D consists of per-16-day interval spectral reflectance or index differences. We rank the difference values and extract a set of statistics (selected ranks and averages) from this ranking.
Similar to the phenological metrics, each metric is stored as a single-band 16-bit unsigned GeoTIFF file, and metrics are organized in folders named with the corresponding tile names. The generic naming convention is the following: YYYY_B_T_S_C.tif, where YYYY is the corresponding year; B is the spectral band or index; T is the time-series from which the statistics were extracted: index [c] represents the corresponding year (time-series C), [p] stands for the preceding year (time-series P), and [dif] stands for the time-series of per-16-day interval differences (time-series D).
Similar to the phenological metrics, each metric is stored as a single-band 16-bit unsigned GeoTIFF file, and metrics are organized in folders named with the corresponding tile names. The generic naming convention is the following: YYYY_B_T_S_C.tif, where:
• YYYY - corresponding year;
• B - spectral band or index;
• T - time-series from which the statistics were extracted: index [c] represents the corresponding year (time-series C), [p] stands for the preceding year (time-series P), and [dif] stands for the time-series of per-16-day interval differences (time-series D); slope of linear regression and standard deviation metrics, which are calculated from the entire time-series, do not have this name section;
• S - extracted statistic;
• C - corresponding band or index used for ranking (only for metrics extracted from ranks defined by a corresponding value).
Example: 2018_blue_c_max_RN.tif - the metric represents the value of the normalized surface reflectance of the Landsat blue band for the 16-day interval that has the highest red/NIR normalized ratio (also known as NDVI) value during the year 2018. The high variability of metrics allows using the generic change detection metric set for a wide spectrum of land cover monitoring applications. Annual metrics for the corresponding and preceding years (Figure 14A,B) are compared by calculating differences during change detection model parametrization to indicate land cover change. The inter-annual spectral reflectance difference can be visualized by combining the same statistics extracted from different years (Figure 14C). Metrics that represent the slope of the linear regression, together with statistics extracted from per-16-day differences (Figure 14D), provide important information on land cover change [19,20,29]. Annual change detection metrics serve the operational update of the global forest cover change product that is developed by the GLAD team for the Global Forest Watch initiative (www.globalforestwatch.org). Data users should be aware that, while using four years of data to create a change detection metric set improves classification quality, the metric set is sensitive to changes that happened not only between the corresponding and preceding years, but also between the corresponding year and the years i-2 and i-3. The "last annual observation" metric may be used to exclude changes that happened earlier. Alternatively, a change detection algorithm can be applied annually to eliminate redundant change detections.
Known Issues and Limitations
The GLAD team is constantly updating the ARD product to ensure data completeness and quality. Here, we list known issues that users should consider when using the product. Some of these issues will be addressed in future revisions of the GLAD ARD.
• The current version of the GLAD ARD product is not suitable for real-time land cover monitoring. The ARD product relies on Tier 1 data, which currently features up to a 26-day processing delay by USGS (https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-1). A 16-day interval is processed only after all daily data are available as Tier 1. Thus, the ARD update usually features a 1-month delay.
• Landsat 5 images for 2001-2002 were filtered for possible sensor artifacts during ARD processing. However, after examining images recently processed to the Collection 1 standard, we suggest that some of these artifacts were removed by the data provider. A re-processing of the year 2001 and 2002 ARD will be scheduled to include the corrected L5 data.
• Images crossing the 180° meridian were not processed due to technical difficulties.
This issue has not yet been addressed due to low demand for data in these regions. The images will be processed and the corresponding 16-day composites will be updated later.
• Due to low sun azimuth and the similarity between snow cover and clouds during the winter season in temperate and boreal climates, the GLAD Landsat ARD algorithm is not suitable for wintertime image processing above 30°N and below 45°S latitude. We do not process images during times affected by seasonal snow cover, so the resulting 16-day intervals have no data. Some of the images (and resulting 16-day composites) may still include snow-covered observations. We suggest further filtering such observations using the "snow/ice" observation quality flag or by removing certain 16-day intervals from data processing.
• The surface reflectance normalization is not equal to a physically-based atmospheric and surface anisotropy correction. While the comparison of ARD data with MODIS-based surface reflectance and successful ARD applications for global land cover mapping suggest that the product has sufficient quality for its intended use, the ARD data may not be suitable for precise analysis of land surface reflectance.
• Users should be aware that the image normalization method is not designed to deal with specular reflectance and thus introduces bias over water surfaces. We do not recommend using the ARD product for water quality assessment or any other hydrology applications beyond surface water extent mapping.
Author Contributions: The GLAD ARD concept and algorithm development, M.C.H., P.P., A.P. and A.T.; parametrization of observation quality assessment models, S.T., B.A., and P.P.; software and data portal development, I.K., A.K., and A.P.; data processing support, Q.Y. All authors have read and agreed to the published version of the manuscript.
Funding: The GLAD high-performance computer system and ARD development were supported by funding from NASA Land Cover and Land Use Change, Applied Sciences, and SERVIR programs and USAID CARPE and US Government SilvaCarbon programs.
13,542.2
2020-01-29T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Strangely mined bitcoins: Empirical analysis of anomalies in the bitcoin blockchain transaction network The blockchain technology introduced by bitcoin, with its decentralised peer-to-peer network and cryptographic protocols, provides a public and accessible database of bitcoin transactions that have attracted interest from both economics and network science as an example of a complex evolving monetary network. Despite the known cryptographic guarantees present in the blockchain, there exists significant evidence of inconsistencies and suspicious behavior in the chain. In this paper, we examine the prevalence and evolution of two types of anomalies occurring in coinbase transactions in blockchain mining, which we reported on in earlier research. We further develop our techniques for investigating the impact of these anomalies on the blockchain transaction network by building networks induced by anomalous coinbase transactions at regular intervals and calculating a range of network measures, including degree correlation and assortativity, as well as inequality in terms of wealth and anomaly ratio using the Gini coefficient. We obtain time series of network measures calculated over the full transaction network and three sub-networks. Inspecting trends in these time series allows us to identify a period in time with particularly strange transaction behavior. We then perform a frequency analysis of this time period to reveal several blocks of highly anomalous transactions. Our technique represents a novel way of using network science to detect and investigate cryptographic anomalies.
Introduction
Blockchain technology contains both structural and operational properties that are designed to safeguard it, including an underlying open decentralized peer-to-peer network between miners, cryptographic protocols, and validation of transactions between users. Its introduction in 2008 has led to a proliferation of cryptocurrencies over the last decade, pioneered by bitcoin [1]. The bitcoin blockchain contains a complete record of over half a billion bitcoin transactions, between over 46 million digital wallets, stored in 670,000 blocks, representing over 18 million bitcoins. The economic impact of this novel technology and the accompanying financial system is already considerable and it has attracted researchers from various disciplines. In this study, we compute several network measures for the full network and the sub-networks, updating them on a monthly basis. These network measures allow us to compare the networks' characteristics: their structural properties and the distribution of some node properties, such as transaction amount and in-degree. Based on this we are able to show that the basic properties of the sampled sub-networks are similar to those of the full network, making this a feasible approach to analyse big network data. Furthermore, by looking at their evolution over time, we are able to detect periods that need further investigation. Building on our previous work, where the methodology was first presented [26], here we develop it further and pay special attention to a particularly unusual time period early in the blockchain, which appears to mark the beginning of a deliberate dispersal of bitcoin, presumably to create the monetary ecosystem.
Our results consequently cast some doubt on the origin story of bitcoin, and clearly identify the period in 2010 when bitcoin's use as a monetary unit appears to have been kick-started by a large number of transfers originating from coins mined with the cryptographic anomaly we identified. In the next section we present the methodology we use in this paper, starting with a description of the two anomalies, the sampling techniques developed, and the network measures, followed by our results. The paper concludes with a summary of our findings and directions for future work.
Materials and methods
The methodology of this paper consists of three parts. Firstly, the description of two types of anomalies in coinbase transactions, which is the motivation behind this paper. Secondly, the creation of sub-networks associated with the two anomalies. Finally, the description of network measures which we use to analyse and compare the sub-networks and the full network.
Background
The now well-known origin story of bitcoin is that the technology originated with a posting by a Satoshi Nakamoto to the cryptography mailing list in 2008. This was followed by a slow expansion during 2009-10 as early adopters installed mining software and began creating bitcoins, followed by more widespread adoption after a posting on the slashdot.com online forum in July 2010. Although there has been some question as to whether a single individual could have developed and tested this system, simply due to the range of expertise required, this story has been broadly accepted by researchers. At the end of 2019 we performed a simple frequency analysis of the hexadecimal values (nibbles) by position in the bitcoin blockchain. The blockchain itself is a sequence of 80-byte block headers, used both to cryptographically certify the transactions belonging to any given mined block and to provide a proof-of-work target in the form of a nonce, which miners vary to find a block header that can be used to commit a set of bitcoin transactions. The latter is achieved with a 4-byte nonce, effectively a 32-bit unsigned integer, which in the public code is repeatedly incremented by the mining software in order to find a value for which a double SHA256 operation on the block header gives a result below the target set by the difficulty level governing mining. Difficulty levels are continuously adjusted to maintain a constant rate of mining of around 10 minutes/block on average [25]. Whilst parts of the block header are predictable, notably the version, difficulty and most of the timestamp field, the Merkle-Damgård hash, the previous block hash and the nonce should all be randomly distributed, as they are dependent on properties of the SHA256 algorithm. Whilst no frequency distribution anomalies were found in either the Merkle-Damgård or the previous block hash, two distinct anomalous patterns were detected in the nonce, which is the key component of the proof of work performed by all miners to obtain bitcoins. The bitcoin proof of work performed by miners is simply to repeatedly calculate two SHA256 functions, one on the block header and the second on the result of the first. If the numerical result of the second SHA256 operation is less than that specified by the governing difficulty level, then the miner has found a block that can be linked into the blockchain, and receives a specified amount of new bitcoins as a reward.
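To make the proof-of-work description concrete, the sketch below computes the double SHA256 of a raw 80-byte block header, checks it against a target, and extracts the individual nonce nibbles examined by the frequency analysis. It is a minimal sketch using only Python's standard hashlib; the helper names, the assumption that the header is supplied as raw bytes, and the nibble ordering (most significant first, following the description of the nonce as a 32-bit unsigned integer) are ours, not taken from the paper's code.

# Sketch of the bitcoin proof-of-work check and nonce nibble extraction
# (assumes `header` is the raw 80-byte block header; helper names are illustrative).
import hashlib

def block_hash(header: bytes) -> int:
    """Double SHA256 of the 80-byte header, interpreted as a little-endian integer."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "little")

def meets_target(header: bytes, target: int) -> bool:
    """A header is valid if its double-SHA256 value is below the difficulty target."""
    return block_hash(header) < target

def nonce_nibbles(header: bytes) -> list[int]:
    """The nonce occupies the last 4 bytes of the header; return its 8 hex nibbles,
    most significant first, for per-position frequency analysis."""
    nonce = int.from_bytes(header[76:80], "little")
    return [(nonce >> shift) & 0xF for shift in range(28, -4, -4)]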
Two anomalies were found with frequency analysis of the individual nibbles of the 32-bit nonce field. The first occurs in the first hexadecimal position (nibble) of the block's nonce, which in a disproportionate number of blocks has a value in the range 0-3, as shown in Fig 1B. The second, as shown in Fig 1A, is in the penultimate position of the nonce, where an abnormal number of 0s can be seen in the first 18 months of mining. We refer to these as the P (extended Patoshi) anomaly and the Z (Zerononce) anomaly, respectively. Both patterns seem to be associated with the originators of bitcoin. The extended Patoshi anomaly in the first nibble of the nonce appears in all of the first 64 blocks mined, and is a notable feature of the first months of mining. This was first noticed by Sergio Lerner, who observed this feature as part of an analysis of the extra-nonce behaviour in the first year and attributed it to mining by Nakamoto, which seems apparent from its presence in the first blocks mined. Our analysis, however, also revealed that it returns between mid 2010-11, 2012-14 and 2016-18, as shown in Fig 1B. The second, "penultimate zero", pattern also occurs almost from the beginning of the blockchain, but appears to occur only once, although a very slightly above-expected frequency of zeros in this field is present from 2016. Although it has been argued online that the Patoshi pattern is a consequence of miners evaluating the nonce sequentially, and thus introducing a bias towards lower nonce values, this is not consistent with the expected frequency of valid nonces per block, since in practice these are extremely rare. Courtois et al. (2013) observe that since the nonce value is constrained to 32 bits, the expected number of valid nonces for any given block is approximately 1/difficulty (each of the 2^32 possible nonce values is valid with probability 2^-32/difficulty), which would imply on average one valid nonce per block at the easiest difficulty level used until the end of December 2010, and significantly fewer with the higher difficulty levels used after that. This was verified by an exhaustive search of the nonce space for the first 2000 blocks [12]. After accounting for the expected number of blocks that would contain these values (6.25% in the penultimate zero case and 25% for the Patoshi anomaly in the first nibble), we estimate that approximately one third of all coins mined at the first difficulty level were obtained from blocks mined with these features. Across the entire ten years of both patterns, well over 3 million bitcoins appear to have been obtained from blocks with these distinguishing features. The size of these two patterns clearly warrants further investigation to see if additional information can be found in the transactions derived from coins mined in these blocks. Previous research into early transactions in the bitcoin network has revealed evidence of suspicious clusters, notably the work of Ron and Shamir [5], which discovered a large number of coins being progressively consolidated into a small number of apparently connected wallets; generally, however, research in this area has not had a clear marker of the blocks themselves on which to attach suspicion.
Induced transaction sub-networks
One of the contributions of this paper is a methodology for extracting specific sub-networks from the blockchain transaction network. The first step is to prepare the transaction database. For this, we extract the entire bitcoin blockchain from its origin to November 2019.
The data underlying the results presented in the study are publicly available from www.bitcoin.com. We parse the blocks and construct a database of transactions with information about the "from" wallet and one or more "to" wallets. Each transaction corresponds to the movement of bitcoin between wallets. The transactions are furthermore marked with their timestamp and the transaction amount. Wallets that received the miner's reward coins (otherwise known as coinbase transactions) from blocks with the two anomalous patterns are marked as tainted. As coins are transferred to other wallets, the percentage taint for each pattern is calculated and updated for the receiving wallet. The transaction database is thus an edge list of timestamped transactions between wallets, together with the transaction amount and the tainted ratios for both the P and Z anomalies. We use the edge list to create a directed network. This type of network is also called the bitcoin address network (BAN) [4]. We focus on the BAN in this research, since we want a representation of the raw transactions between addresses. The next step in our methodology is extracting specific networks of interest, more specifically, networks that originate with certain coinbase transactions. The process is as follows. We start from the set of all transactions from the origin of the blockchain until a given time point and use these data to create a BAN. From this BAN we consider sub-networks induced by specific coinbase transactions. This entails snowball sampling: we start from a set of coinbase transactions and follow their edges to the linked wallets, which are added to the sub-network together with the transactions. Subsequently, any wallet in the full network that is linked via a transaction to one of the most recently added wallets in the sub-network is also added to the sub-network. This process is repeated until no more transactions can be added. Since the full network is static and directional, the process will terminate. Due to the size of the entire blockchain it is not feasible to build the sub-networks with the snowball sampling technique using all the specific coinbase transactions under consideration. To mitigate this, we choose a random sample from the considered coinbase transactions to start the snowball sampling with. To get more robust results, this is repeated several times. In this paper, we apply our proposed methodology to the two anomalies that were identified in the coinbase transactions, namely the Z and the P anomaly, and compare their induced sub-networks to the full network and the sub-network that does not stem from either of the two anomalies. We thus consider three sets of coinbase transactions to induce our sub-networks, as listed below:
• T_Z = {cb | the Z anomaly is in the nonce of the cb block}
• T_P = {cb | the P anomaly is in the nonce of the cb block}
• T_{¬Z ∧ ¬P} = {cb | neither the Z anomaly nor the P anomaly is in the nonce of the cb block}
As a result, we obtain, in addition to the full network (which we refer to as All), three sets of sub-networks, each one induced by one of the sub-sets of transactions listed above. We refer to these as Tainted Z, Tainted P and Not Tainted Z & Not Tainted P, respectively. We build these sub-networks and the full network incrementally, first using transactions from the origin until January 2010 and then in each iteration adding one more month until May 2012. When inducing each sub-network, we randomly sample 1000 of the respective coinbase transactions and repeat the process ten times; a sketch of this sub-network construction is given below.
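The following is a minimal sketch of the snowball-sampling construction of an induced sub-network, using networkx. It assumes the transaction edge list has already been parsed into (from_wallet, to_wallet, timestamp, amount) tuples and that the wallets funded by the selected coinbase transactions are used as seeds; the function names and data layout are illustrative assumptions, not the code used in this study.

# Sketch of inducing a sub-network from a set of coinbase transactions by
# snowball sampling (names and data layout are illustrative assumptions).
import random
import networkx as nx

def build_ban(edges):
    """Bitcoin address network: directed multigraph of (from_wallet, to_wallet) transactions."""
    g = nx.MultiDiGraph()
    for frm, to, ts, amount in edges:
        g.add_edge(frm, to, timestamp=ts, amount=amount)
    return g

def induced_subnetwork(ban, coinbase_wallets, sample_size=1000, seed=None):
    """Snowball sample: start from wallets funded by the selected coinbase
    transactions and repeatedly follow outgoing edges until no new wallets appear."""
    rng = random.Random(seed)
    seeds = rng.sample(list(coinbase_wallets), min(sample_size, len(coinbase_wallets)))
    visited, frontier = set(seeds), set(seeds)
    while frontier:
        nxt = set()
        for wallet in frontier:
            for succ in ban.successors(wallet):
                if succ not in visited:
                    nxt.add(succ)
        visited |= nxt
        frontier = nxt
    return ban.subgraph(visited).copy()

In the analyses above, this sampling would be repeated ten times with 1000 randomly chosen coinbase transactions each, and the resulting network measures averaged.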
In the Results section, we show the mean of these ten repetitions. When we take a closer look at the last months of 2010, we build the networks at more frequent intervals, with 1-4 days between increments.
Network measures
The objective of this paper is to compare the structure and properties of the full BAN to the sub-networks induced by tainted and non-tainted coinbase transactions. Below, we describe the network measures which we include in our analyses. First we measure basic properties of the networks. The three fundamental measures are the number of nodes, density and diameter [27]. Number of nodes is simply the total number of nodes in the network. Network density is defined as the number of edges divided by the maximum possible number of edges. It gives an indication of how well connected the network is. Finally, network diameter is a measure of the length of the longest shortest path in the network. Given a pair of connected nodes in a network, there is a path between them that is shorter than any other path between them. The diameter is the longest such path in the network. Similar to the diameter of a circle, it gives the longest distance to connect any two nodes. In our analyses we calculated the network diameter based on a random sample of 1000 pairs of nodes, because of the time complexity when finding the shortest path between all pairs of nodes. In their study of transaction dynamics in the BAN, Kondor et al. (2014) used the Gini coefficient to quantify inequality in the network [6]. Generally, the Gini coefficient is defined as $G = \frac{2\sum_{i=1}^{n} i\,x_i}{n\sum_{i=1}^{n} x_i} - \frac{n+1}{n}$, where $\{x_i\}$ is a monotonically non-decreasing ordered sample of size n. Thus, G = 0 indicates perfect equality, or every observation being equal in terms of the value being considered, whereas G = 1 indicates complete inequality. In this paper we use the Gini coefficient to characterize the heterogeneity of the distribution of in-degree, out-degree, tainted Z ratio, tainted P ratio and transaction amount of the nodes in the full network and sub-networks. Kondor et al. (2014) also investigated structural properties of the network in terms of assortativity and clustering coefficient [6]. Assortativity or degree correlation of the network measures the nodes' tendency to be linked to nodes with a similar degree [27]. It is obtained as the Pearson correlation coefficient of the out- and in-degrees of connected node pairs, $r = \frac{\sum_{e}\left(j^{out}_{e}-\overline{j^{out}}\right)\left(k^{in}_{e}-\overline{k^{in}}\right)}{E\,\sigma^{out}\sigma^{in}}$, where, for the edge e that links node $v_{from}$ to $v_{to}$, $j^{out}_{e}$ is the out-degree of node $v_{from}$ and $k^{in}_{e}$ is the in-degree of node $v_{to}$; E is the number of edges, the overlines denote means over all edges, and $\sigma^{out}$ and $\sigma^{in}$ are the corresponding standard deviations. An assortative network (where r > 0) is characterized by high degree nodes being linked to other high degree nodes and low degree nodes being linked to other low degree nodes. In contrast, in a disassortative network (r < 0) high degree nodes have a tendency to connect to low degree nodes, creating a hub and spoke structure. The clustering coefficient of a network is defined as the density of triangles in the network, $C = \frac{1}{n}\sum_{v}\frac{2\Delta_v}{d_v(d_v-1)}$, where $\Delta_v$ is the number of triangles containing node v and $d_v$ is the degree of node v. The sum runs over all nodes in the network [27]. To compute C we must ignore the directionality of the network. The clustering coefficient measures how connected the nodes are in their closest neighborhoods. These measures are computed for each full and sub-network as they are incrementally built from month-to-month. As a result we obtain time series showing the development of the networks' properties.
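The measures described in this section can be sketched as follows, assuming the BAN is held as a networkx directed multigraph; the Gini implementation follows the ordered-sample formula above, and the sampled-diameter and clustering calculations mirror the description (1000 random node pairs, direction ignored). Function names are illustrative.

# Sketch of the network measures used above (illustrative; NumPy/networkx).
import numpy as np
import networkx as nx

def gini(values):
    """Gini coefficient of a non-negative sample (0 = perfect equality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n

def degree_correlation(g):
    """Pearson correlation of source out-degree and target in-degree over all edges."""
    out_deg, in_deg = dict(g.out_degree()), dict(g.in_degree())
    j = np.array([out_deg[u] for u, v in g.edges()], dtype=float)
    k = np.array([in_deg[v] for u, v in g.edges()], dtype=float)
    return float(np.corrcoef(j, k)[0, 1])

def network_measures(g, sample_pairs=1000, seed=0):
    """Node count, density, clustering (ignoring direction) and sampled diameter."""
    und = nx.Graph(g)                       # drop direction/multi-edges for clustering and paths
    rng = np.random.default_rng(seed)
    nodes = list(und.nodes())
    lengths = []
    for _ in range(sample_pairs):
        u, v = rng.choice(nodes, size=2, replace=False)
        if nx.has_path(und, u, v):
            lengths.append(nx.shortest_path_length(und, u, v))
    return {
        "nodes": g.number_of_nodes(),
        "density": nx.density(g),
        "clustering": nx.average_clustering(und),
        "diameter_sampled": max(lengths) if lengths else 0,
        "assortativity": degree_correlation(g),
    }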
Trends in the early years of the blockchain
We start by looking at the properties of the sub-networks in comparison to the All network. Fig 2 shows the diameter, number of nodes and density for the networks as subsequent months are added. Note the log scale on the y-axis. Firstly, and not surprisingly, the All network has the most nodes; however, as we consider a longer timespan, the sizes of the sub-networks grow in the same manner as the All network. Secondly, the density of the sub-networks is higher than that of the All network. This is expected because of the way the sub-networks are constructed. At the beginning, each source node induces an almost fully connected network, but as more nodes are added, the number of edges is proportionally lower, and thus the density decreases. Finally, the diameter is rather fuzzy in the beginning, but as the networks grow in size, the diameter becomes similar for all of them. This indicates that the sub-networks span a similar range as the All network. To conclude, our proposed way of constructing sub-networks induced by a sample of coinbase transactions seems to generate networks that are comparable to the All network. Next we look at the structural properties of the All network and the sub-networks, including the distribution of inequality. Figs 3 and 4 show the Gini coefficient for in-degree, out-degree, transaction amount, tainted Z and tainted P, on the one hand, and the degree correlation and clustering coefficient, on the other hand, for the All network and each of the three sub-networks as months are added incrementally. In each plot, the red line denotes the whole network. We can see how the values for the sub-networks all converge towards each other and slowly approach the red line. The distance between them can probably be attributed to the way the sub-networks are created. Moreover, we see that in the beginning, the in-degree tends to be more equally distributed in the sub-networks than in the whole network, whereas the opposite holds for out-degree: the distribution of out-degree is less equal in the sub-networks. Kondor et al. (2014) speculated that the reason for the Gini being high for in-degree and low for the out-degree was that at the beginning of the blockchain, people were collecting their coins into one wallet, since they were unable to exchange them [6]. In our case, the reason for the Gini being low for the in-degree and high for the out-degree can be explained by the way the sub-networks are created. When adding a transaction to the sub-network, its prior transactions are not added, so it is expected that the in-degrees of all newly added wallets are similar, since new nodes start from 'square zero'. We note that the Gini of the out-degree converges to the full network ahead of the others, implying that the behavior of the first few months is due to the building of the sub-network. Next we look at the Gini coefficient of the Tainted Z and Tainted P ratio. For all sub-networks, the Tainted Z Gini remains higher than in the All network, and they converge early on. This implies that these coinbase transactions get distributed in the transaction network quickly. The Tainted P Gini is higher in the sub-networks at first, but in October 2010, the All network takes over. The Gini of the Tainted Z is higher than that of the Tainted P in the sub-networks and the full network.
Regarding the inequality in terms of amount, we see that at the beginning both the Tainted P and Tainted Z sub-networks have very high values, indicating a very unequal distribution of wealth in these sub-networks. However, the Gini value quickly drops and then remains below the Gini of the full network. We can see from Fig 4 that in 2010 all the networks have a rather high clustering coefficient, which decreases as time goes on. The clustering coefficient is comparable in the All network and the sub-networks. The degree correlation fluctuates a lot during the time period we consider, especially in the sub-networks. There it also remains higher than in the full network until early 2011. Both sub-networks of not tainted transactions have a high clustering coefficient in the beginning, whereas all converge to the same low value towards the end of the period. This indicates that the structural properties of the networks we consider vary greatly between themselves and also across time, which gives cause for further investigation. The development of the distribution of inequality in the sub-networks compared to the full network shows how the tainted coinbase transactions blended in with the rest of the transactions in the blockchain. Our analysis helps identify peculiarities in the transaction network at certain moments in time where the transaction network ought to be investigated in more detail. For example, the development of the networks' degree correlations raises questions, because of the varied patterns in the sub-networks. In addition, there is a considerable change in all the measures around November 2010. The tainted Z ratio seems to be least affected by this, however. We will take a closer look at this behavior in the next subsection.
A closer look at November 2010
In our analysis so far, we witnessed a shift in both the Gini measures and the network structural measures in the final quarter of 2010. Therefore we will take a closer look at the months October, November and December of 2010. We repeat our analysis from before, this time with smaller time steps and more granularity. Fig 5 shows the Gini values at a more granular level and Fig 6 shows the same for the degree correlation and the clustering coefficient of the full network and the three sub-networks, for the months October, November and December 2010. These values are obtained by increments of 1-5 days in each step. We see from these figures that the shift happens around November 15th and that it is a rather drastic shift. For example, in Fig 5, the in-degree Gini coefficient of the full network changes from close to 0.8 to almost 0.6. For the full network, the Gini decreases in terms of in-degree, out-degree and amount, but increases in terms of tainted Z and tainted P. The sub-networks show a similar trend, except for tainted P, where their values decrease after the middle of November, in contrast to the full network. The change is more drastic in the sub-networks than in the full network when looking at out-degree, tainted P and amount. It is interesting to look at the development of the tainted P inequality in the tainted P network. Before the shift, it is very high, above 0.5, but it takes a large dive around mid-November and becomes the lowest of all networks. At the same time, the tainted P inequality increases overall, i.e. in the full network. In terms of the structural measures (see Fig 6), the clustering coefficient drops in all networks, and relatively more so in the sub-networks than in the full network.
This implies that many transactions are being added, which dilutes the ratio of triangles and thus reduces the clustering. We also see here that the degree correlation fluctuates more than the other measures. The tainted Z and not tainted sub-networks are similar in their trends, with a big increase. However, both the full network and the tainted P sub-network take a sudden dip on November 15th, then they increase (the increase is bigger in the sub-network) before going down again in the first half of December. This similarity in behavior again indicates that the P anomaly needs closer inspection.
Transaction count analysis
Following this analysis, we manually sampled blocks mined during this period and their associated transactions. Another way to examine the evolution of the use of bitcoin as a monetary unit is to simply look at the number of transactions associated with each block. The creation of bitcoin blocks is independent of the number of transactions; the blockchain difficulty level is automatically adjusted so that bitcoin blocks are created on average every 10-12 minutes. This, in conjunction with the requirement that all miners must see all transactions that will be committed by the winning block, is what determines the upper limit on the total number of transactions that any block can contain. In later years this is 3000-4000 transactions/block. In the first year of mining, however, the majority of blocks contained only a single (coinbase) transaction (Fig 8; https://doi.org/10.1371/journal.pone.0258001.g008). The earliest bursts of transactions were committed in the same or consecutive blocks, which implies they were made within the same ≈12 minutes. Following each of these instances there is a marked increase in the average number of transactions until November 2010, when, starting on November 15th at 18:45:30 (block height 92037), there is a two-week period of bursts of blocks with large numbers of transactions, corresponding exactly with the time period identified by the above network analysis. All of these large bursts of transactions are heavily sourced from tainted coins from both patterns, and manual examination shows interesting and distinguishable characteristics in the transactions in these blocks, notably large numbers of transfers of the same amount, transfers going immediately through a wallet which is never used again, and, in the early blocks (notably 51728 and 51729), a series of transfers each precisely 0.01 bitcoins less than the previous one, although originating from different wallets. The earlier and smaller bursts may indicate testing of the software that was presumably used to create these transactions; it seems highly improbable that these were performed manually, given the short time frame and the number of transactions made. For example, see block 51729: https://www.blockchain.com/btc/block/000000001786abd75dc912d8eabe85080c7e822858d445644fa3a3e059c2033b. This activity appears to begin early in 2010, with 6 transactions made in block 35637, shown in Fig 9. There then appear to be three distinct instances of these disbursements in 2010: what appears to be a short burst on 1st April 2010; a larger instance in July, following which average transaction activity begins to noticeably increase; culminating in a major set of transactions in November 2010, beginning on the 15th, the same period identified by the network analysis as marking a noticeable shift in the Gini coefficient and other measures.
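A simple per-block transaction count analysis like the one described here can be sketched as follows; the table layout (height, timestamp, n_tx), the one-day rolling window, and the burst threshold are illustrative assumptions rather than parameters from the paper.

# Sketch of flagging bursts of transactions per block
# (data layout and threshold are illustrative assumptions).
import pandas as pd

def find_bursts(blocks: pd.DataFrame, window: int = 144, factor: float = 5.0) -> pd.DataFrame:
    """Flag blocks whose transaction count far exceeds the local rolling average.

    `blocks` is expected to have columns: height, timestamp, n_tx.
    A `window` of 144 blocks is roughly one day at ~10 minutes/block.
    """
    blocks = blocks.sort_values("height").copy()
    rolling = blocks["n_tx"].rolling(window, min_periods=1).mean()
    blocks["is_burst"] = blocks["n_tx"] > factor * rolling
    return blocks[blocks["is_burst"]][["height", "timestamp", "n_tx"]]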
Conclusion
Analysis of the entire transaction network for any cryptocurrency is prohibitively expensive both in CPU and disk time, frustrating what would otherwise be an ideal target for network science. If this form of monetary unit is to be adopted widely then its integrity must be verifiable. Finding an anomaly in the cryptographic underpinnings is not particularly useful in itself, without being able to investigate how coins related to that anomaly subsequently behaved. In this paper we used network science to look at the evolution of several network measures and the distribution of transaction properties in the bitcoin transaction network to investigate the prominence of two anomalies which stem from coinbase transactions. We presented a methodology for constructing sub-networks induced by certain bitcoin transactions using sampling, which allowed us to adequately estimate the networks' properties. We compared the networks' structural characteristics to the full network and saw that the distribution of several node properties, such as in-degree, transaction amount and tainted ratio, is different in the sub-networks when compared to the full network. This is apparent in the networks until late 2010, when they start to converge to what is observed in the full network. In particular, the degree correlation of the sub-networks associated with the two anomalies shows a great deviation from the rest at the same time as these anomalies were prominent in block mining. Based on this information we then examined transactions in the period we had identified more closely, and also performed a simple frequency analysis which clearly illustrated the highly anomalous transaction behaviour around the dates identified by the network analysis. The size of the blockchain and its transactions places a prohibitively high computational complexity on analysing its network behaviour, hence using this approach as a basis for similar methods to compress computation time for blockchain transaction analysis is worth exploring. In contrast to anomaly detection methods which aim at detecting specific anomalous transactions, our technique is meant to investigate the entire transaction network with the goal of finding abnormal behavior in its structure, as measured by various network measures. This approach can help narrow down the set of transactions that need to be investigated further, as we did in this paper, since it is difficult to label each and every transaction as anomalous or not. Further work is needed to get a better understanding of the networks we examined and the bitcoin transaction network. We saw in our analyses that more frequent updates of the network measures gave more detailed insights, and we could better see when and how the anomalies affect transaction patterns. We would like to carry out our analyses for the entire blockchain at this more granular level. Also, we have only analysed transactions until mid-2012. In our continued work, our plan is to consider the entire blockchain and investigate the recurrence of the P anomaly in 2012-13 and 2016-17. Finally, we included only a handful of network measures in our analyses. Many others exist, which could be included in a follow-up study.
7,035
2021-09-30T00:00:00.000
[ "Computer Science" ]